Processed 2 GNN file(s) from directory: src/gnn/examples
Search pattern used: **/*.md
Path: src/gnn/examples/pymdp_pomdp_agent.md
Path: src/gnn/examples/rxinfer_multiagent_gnn.md
Checked 2 files, 2 valid, 0 invalid
Analyzed 2 files
Average Memory Usage: 0.50 KB
Average Inference Time: 218.62 units
Average Storage: 5.29 KB

Path: src/gnn/examples/pymdp_pomdp_agent.md
  Memory Estimate: 0.48 KB
  Inference Estimate: 154.07 units
  Storage Estimate: 3.83 KB

Path: src/gnn/examples/rxinfer_multiagent_gnn.md
  Memory Estimate: 0.52 KB
  Inference Estimate: 283.16 units
  Storage Estimate: 6.76 KB
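The averages above are plain arithmetic means of the two per-file estimates. A quick sketch reproducing them from the full-precision values reported in resource_data.json below:

```python
# Per-file estimates, copied verbatim from resource_data.json below.
per_file = {
    "pymdp_pomdp_agent.md": {
        "memory_kb": 0.484375,
        "inference_units": 154.06988264859797,
        "storage_kb": 3.82846875,
    },
    "rxinfer_multiagent_gnn.md": {
        "memory_kb": 0.5166015625,
        "inference_units": 283.1611446514433,
        "storage_kb": 6.7573515625,
    },
}

def average(metric: str) -> float:
    """Arithmetic mean of one metric across all analyzed files."""
    vals = [m[metric] for m in per_file.values()]
    return sum(vals) / len(vals)

print(f"Average Memory Usage: {average('memory_kb'):.2f} KB")          # 0.50 KB
print(f"Average Inference Time: {average('inference_units'):.2f} units")  # 218.62 units
print(f"Average Storage: {average('storage_kb'):.2f} KB")              # 5.29 KB
```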
Temporal complexity: the degree to which the model's behavior depends on past states or sequences (e.g., the transition from a state at t to one at t+1).
View standalone: resource_report_detailed.html
{
"/home/trim/Documents/GitHub/GeneralizedNotationNotation/src/gnn/examples/pymdp_pomdp_agent.md": {
"file": "/home/trim/Documents/GitHub/GeneralizedNotationNotation/src/gnn/examples/pymdp_pomdp_agent.md",
"model_name": "Multifactor PyMDP Agent v1",
"memory_estimate": 0.484375,
"inference_estimate": 154.06988264859797,
"storage_estimate": 3.82846875,
"flops_estimate": {
"total_flops": 1050.0,
"matrix_operations": 0,
"element_operations": 0,
"nonlinear_operations": 0
},
"inference_time_estimate": {
"cpu_time_seconds": 2.1e-08,
"cpu_time_ms": 2.1e-05,
"cpu_time_us": 0.020999999999999998
},
"batched_inference_estimate": {
"batch_1": {
"flops": 1050.0,
"time_seconds": 2.1e-08,
"throughput_per_second": 47619047.61904762
},
"batch_8": {
"flops": 6674.971489500035,
"time_seconds": 1.334994297900007e-07,
"throughput_per_second": 59925349.58826627
},
"batch_32": {
"flops": 25518.25782075925,
"time_seconds": 5.10365156415185e-07,
"throughput_per_second": 62700205.13306323
},
"batch_128": {
"flops": 99830.77636640746,
"time_seconds": 1.9966155273281492e-06,
"throughput_per_second": 64108486.710652955
},
"batch_512": {
"flops": 394234.3967437306,
"time_seconds": 7.884687934874611e-06,
"throughput_per_second": 64935987.85760216
}
},
"model_overhead": {
"compilation_ms": 79,
"optimization_ms": 240.5,
"memory_overhead_kb": 2.572265625
},
"complexity": {
"state_space_complexity": 6.965784284662087,
"graph_density": 0.004761904761904762,
"avg_in_degree": 1.0,
"avg_out_degree": 1.0,
"max_in_degree": 1,
"max_out_degree": 1,
"cyclic_complexity": 0,
"temporal_complexity": 0.0,
"equation_complexity": 8.76,
"overall_complexity": 8.741273094711996,
"variable_count": 21,
"edge_count": 2,
"total_state_space_dim": 124,
"max_variable_dim": 27
},
"model_info": {
"variables_count": 21,
"edges_count": 2,
"time_spec": "Dynamic",
"equation_count": 5
}
},
"/home/trim/Documents/GitHub/GeneralizedNotationNotation/src/gnn/examples/rxinfer_multiagent_gnn.md": {
"file": "/home/trim/Documents/GitHub/GeneralizedNotationNotation/src/gnn/examples/rxinfer_multiagent_gnn.md",
"model_name": "Multi-agent Trajectory Planning",
"memory_estimate": 0.5166015625,
"inference_estimate": 283.1611446514433,
"storage_estimate": 6.7573515625,
"flops_estimate": {
"total_flops": 20.0,
"matrix_operations": 0,
"element_operations": 8,
"nonlinear_operations": 0
},
"inference_time_estimate": {
"cpu_time_seconds": 4e-10,
"cpu_time_ms": 4.0000000000000003e-07,
"cpu_time_us": 0.0004
},
"batched_inference_estimate": {
"batch_1": {
"flops": 20.0,
"time_seconds": 4e-10,
"throughput_per_second": 2500000000.0
},
"batch_8": {
"flops": 127.14231408571496,
"time_seconds": 2.5428462817142993e-09,
"throughput_per_second": 3146080853.383979
},
"batch_32": {
"flops": 486.0620537287476,
"time_seconds": 9.721241074574952e-09,
"throughput_per_second": 3291760769.48582
},
"batch_128": {
"flops": 1901.5385974553803,
"time_seconds": 3.8030771949107605e-08,
"throughput_per_second": 3365695552.30928
},
"batch_512": {
"flops": 7509.226604642487,
"time_seconds": 1.5018453209284973e-07,
"throughput_per_second": 3409139362.5241137
}
},
"model_overhead": {
"compilation_ms": 206,
"optimization_ms": 1820.0,
"memory_overhead_kb": 5.423828125
},
"complexity": {
"state_space_complexity": 6.820178962415188,
"graph_density": 0.0002824858757062147,
"avg_in_degree": 1.0,
"avg_out_degree": 1.0,
"max_in_degree": 1,
"max_out_degree": 1,
"cyclic_complexity": 0,
"temporal_complexity": 0.0,
"equation_complexity": 3.2577777777777777,
"overall_complexity": 5.364897390812113,
"variable_count": 60,
"edge_count": 1,
"total_state_space_dim": 112,
"max_variable_dim": 16
},
"model_info": {
"variables_count": 60,
"edges_count": 1,
"time_spec": "Dynamic",
"equation_count": 15
}
}
}
resource_data.json
Generated: 2025-06-07 08:30:47
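Two internal relations in the JSON above can be checked directly. In each `batched_inference_estimate` entry, `time_seconds` equals `flops` divided by a fixed rate of 5e10 FLOP/s, and `throughput_per_second` equals `batch_size / time_seconds`; the rate is inferred here from the batch_1 numbers (1050.0 flops / 2.1e-8 s), not stated by the tool. Separately, `state_space_complexity` appears to equal `log2(total_state_space_dim + 1)` for both models. A sketch, with these assumptions flagged:

```python
import math

RATE = 5e10  # FLOP/s; inferred from batch_1 (1050.0 / 2.1e-8), an assumption

def batched(flops: float, batch: int) -> tuple[float, float]:
    """Return (time_seconds, throughput_per_second) for one batch entry."""
    t = flops / RATE
    return t, batch / t

print(batched(1050.0, 1))             # matches batch_1 for the pymdp model
print(batched(6674.971489500035, 8))  # matches batch_8 for the pymdp model

# state_space_complexity vs. log2(total_state_space_dim + 1)
print(math.log2(124 + 1))  # ~6.9658, matches the pymdp model
print(math.log2(112 + 1))  # ~6.8202, matches the rxinfer model
```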
src/gnn/examples
output/gnn_exports
{
"file_path": "/home/trim/Documents/GitHub/GeneralizedNotationNotation/src/gnn/examples/pymdp_pomdp_agent.md",
"name": "Multifactor PyMDP Agent v1",
"metadata": {
"description": "This model represents a PyMDP agent with multiple observation modalities and hidden state factors.\n- Observation modalities: \"state_observation\" (3 outcomes), \"reward\" (3 outcomes), \"decision_proprioceptive\" (3 outcomes)\n- Hidden state factors: \"reward_level\" (2 states), \"decision_state\" (3 states)\n- Control: \"decision_state\" factor is controllable with 3 possible actions.\nThe parameterization is derived from a PyMDP Python script example."
},
"states": [
{
"id": "A_m0",
"dimensions": "3,2,3,type=float",
"original_id": "A_m0"
},
{
"id": "A_m1",
"dimensions": "3,2,3,type=float",
"original_id": "A_m1"
},
{
"id": "A_m2",
"dimensions": "3,2,3,type=float",
"original_id": "A_m2"
},
{
"id": "B_f0",
"dimensions": "2,2,1,type=float",
"original_id": "B_f0"
},
{
"id": "B_f1",
"dimensions": "3,3,3,type=float",
"original_id": "B_f1"
},
{
"id": "C_m0",
"dimensions": "3,type=float",
"original_id": "C_m0"
},
{
"id": "C_m1",
"dimensions": "3,type=float",
"original_id": "C_m1"
},
{
"id": "C_m2",
"dimensions": "3,type=float",
"original_id": "C_m2"
},
{
"id": "D_f0",
"dimensions": "2,type=float",
"original_id": "D_f0"
},
{
"id": "D_f1",
"dimensions": "3,type=float",
"original_id": "D_f1"
},
{
"id": "s_f0",
"dimensions": "2,1,type=float",
"original_id": "s_f0"
},
{
"id": "s_f1",
"dimensions": "3,1,type=float",
"original_id": "s_f1"
},
{
"id": "s_prime_f0",
"dimensions": "2,1,type=float",
"original_id": "s_prime_f0"
},
{
"id": "s_prime_f1",
"dimensions": "3,1,type=float",
"original_id": "s_prime_f1"
},
{
"id": "o_m0",
"dimensions": "3,1,type=float",
"original_id": "o_m0"
},
{
"id": "o_m1",
"dimensions": "3,1,type=float",
"original_id": "o_m1"
},
{
"id": "o_m2",
"dimensions": "3,1,type=float",
"original_id": "o_m2"
},
{
"id": "u_f1",
"dimensions": "1,type=int",
"original_id": "u_f1"
},
{
"id": "G",
"dimensions": "1,type=float",
"original_id": "G"
},
{
"id": "t",
"dimensions": "1,type=int",
"original_id": "t"
}
],
"parameters": {},
"initial_parameters": {},
"observations": [],
"transitions": [
{
"sources": [
"D_f0",
"D_f1"
],
"operator": "-",
"targets": [
"s_f0",
"s_f1"
],
"attributes": {}
},
{
"sources": [
"s_f0",
"s_f1"
],
"operator": "-",
"targets": [
"A_m0",
"A_m1",
"A_m2"
],
"attributes": {}
},
{
"sources": [
"A_m0",
"A_m1",
"A_m2"
],
"operator": "-",
"targets": [
"o_m0",
"o_m1",
"o_m2"
],
"attributes": {}
},
{
"sources": [
"B_f0",
"B_f1"
],
"operator": "-",
"targets": [
"s_prime_f0",
"s_prime_f1"
],
"attributes": {}
},
{
"sources": [
"C_m0",
"C_m1",
"C_m2"
],
"operator": ">",
"targets": [
"G"
],
"attributes": {}
}
],
"ontology_annotations": {
"A_m0": "LikelihoodMatrixModality0",
"A_m1": "LikelihoodMatrixModality1",
"A_m2": "LikelihoodMatrixModality2",
"B_f0": "TransitionMatrixFactor0",
"B_f1": "TransitionMatrixFactor1",
"C_m0": "LogPreferenceVectorModality0",
"C_m1": "LogPreferenceVectorModality1",
"C_m2": "LogPreferenceVectorModality2",
"D_f0": "PriorOverHiddenStatesFactor0",
"D_f1": "PriorOverHiddenStatesFactor1",
"s_f0": "HiddenStateFactor0",
"s_f1": "HiddenStateFactor1",
"s_prime_f0": "NextHiddenStateFactor0",
"s_prime_f1": "NextHiddenStateFactor1",
"o_m0": "ObservationModality0",
"o_m1": "ObservationModality1",
"o_m2": "ObservationModality2",
"\u03c0_f1": "PolicyVectorFactor1 # Distribution over actions for factor 1",
"u_f1": "ActionFactor1 # Chosen action for factor 1",
"G": "ExpectedFreeEnergy"
},
"equations_text": "",
"time_info": {
"DiscreteTime": "t",
"ModelTimeHorizon": "Unbounded # Agent definition is generally unbounded, specific simulation runs have a horizon."
},
"footer_text": "",
"signature": {},
"raw_sections": {
"GNNSection": "MultifactorPyMDPAgent",
"GNNVersionAndFlags": "GNN v1",
"ModelName": "Multifactor PyMDP Agent v1",
"ModelAnnotation": "This model represents a PyMDP agent with multiple observation modalities and hidden state factors.\n- Observation modalities: \"state_observation\" (3 outcomes), \"reward\" (3 outcomes), \"decision_proprioceptive\" (3 outcomes)\n- Hidden state factors: \"reward_level\" (2 states), \"decision_state\" (3 states)\n- Control: \"decision_state\" factor is controllable with 3 possible actions.\nThe parameterization is derived from a PyMDP Python script example.",
"StateSpaceBlock": "# A_matrices are defined per modality: A_m[observation_outcomes, state_factor0_states, state_factor1_states]\nA_m0[3,2,3,type=float] # Likelihood for modality 0 (\"state_observation\")\nA_m1[3,2,3,type=float] # Likelihood for modality 1 (\"reward\")\nA_m2[3,2,3,type=float] # Likelihood for modality 2 (\"decision_proprioceptive\")\n\n# B_matrices are defined per hidden state factor: B_f[states_next, states_previous, actions]\nB_f0[2,2,1,type=float] # Transitions for factor 0 (\"reward_level\"), 1 implicit action (uncontrolled)\nB_f1[3,3,3,type=float] # Transitions for factor 1 (\"decision_state\"), 3 actions\n\n# C_vectors are defined per modality: C_m[observation_outcomes]\nC_m0[3,type=float] # Preferences for modality 0\nC_m1[3,type=float] # Preferences for modality 1\nC_m2[3,type=float] # Preferences for modality 2\n\n# D_vectors are defined per hidden state factor: D_f[states]\nD_f0[2,type=float] # Prior for factor 0\nD_f1[3,type=float] # Prior for factor 1\n\n# Hidden States\ns_f0[2,1,type=float] # Hidden state for factor 0 (\"reward_level\")\ns_f1[3,1,type=float] # Hidden state for factor 1 (\"decision_state\")\ns_prime_f0[2,1,type=float] # Next hidden state for factor 0\ns_prime_f1[3,1,type=float] # Next hidden state for factor 1\n\n# Observations\no_m0[3,1,type=float] # Observation for modality 0\no_m1[3,1,type=float] # Observation for modality 1\no_m2[3,1,type=float] # Observation for modality 2\n\n# Policy and Control\n\u03c0_f1[3,type=float] # Policy (distribution over actions) for controllable factor 1\nu_f1[1,type=int] # Action taken for controllable factor 1\nG[1,type=float] # Expected Free Energy (overall, or can be per policy)\nt[1,type=int] # Time step",
"Connections": "(D_f0,D_f1)-(s_f0,s_f1)\n(s_f0,s_f1)-(A_m0,A_m1,A_m2)\n(A_m0,A_m1,A_m2)-(o_m0,o_m1,o_m2)\n(s_f0,s_f1,u_f1)-(B_f0,B_f1) # u_f1 primarily affects B_f1; B_f0 is uncontrolled\n(B_f0,B_f1)-(s_prime_f0,s_prime_f1)\n(C_m0,C_m1,C_m2)>G\nG>\u03c0_f1\n\u03c0_f1-u_f1\nG=ExpectedFreeEnergy\nt=Time",
"InitialParameterization": "# A_m0: num_obs[0]=3, num_states[0]=2, num_states[1]=3. Format: A[obs_idx][state_f0_idx][state_f1_idx]\n# A[0][:, :, 0] = np.ones((3,2))/3\n# A[0][:, :, 1] = np.ones((3,2))/3\n# A[0][:, :, 2] = [[0.8,0.2],[0.0,0.0],[0.2,0.8]] (obs x state_f0 for state_f1=2)\nA_m0={\n ( (0.33333,0.33333,0.8), (0.33333,0.33333,0.2) ), # obs=0; (vals for s_f1 over s_f0=0), (vals for s_f1 over s_f0=1)\n ( (0.33333,0.33333,0.0), (0.33333,0.33333,0.0) ), # obs=1\n ( (0.33333,0.33333,0.2), (0.33333,0.33333,0.8) ) # obs=2\n}\n\n# A_m1: num_obs[1]=3, num_states[0]=2, num_states[1]=3\n# A[1][2, :, 0] = [1.0,1.0]\n# A[1][0:2, :, 1] = softmax([[1,0],[0,1]]) approx [[0.731,0.269],[0.269,0.731]]\n# A[1][2, :, 2] = [1.0,1.0]\n# Others are 0.\nA_m1={\n ( (0.0,0.731,0.0), (0.0,0.269,0.0) ), # obs=0\n ( (0.0,0.269,0.0), (0.0,0.731,0.0) ), # obs=1\n ( (1.0,0.0,1.0), (1.0,0.0,1.0) ) # obs=2\n}\n\n# A_m2: num_obs[2]=3, num_states[0]=2, num_states[1]=3\n# A[2][0,:,0]=1.0; A[2][1,:,1]=1.0; A[2][2,:,2]=1.0\n# Others are 0.\nA_m2={\n ( (1.0,0.0,0.0), (1.0,0.0,0.0) ), # obs=0\n ( (0.0,1.0,0.0), (0.0,1.0,0.0) ), # obs=1\n ( (0.0,0.0,1.0), (0.0,0.0,1.0) ) # obs=2\n}\n\n# B_f0: factor 0 (2 states), uncontrolled (1 action). Format B[s_next, s_prev, action=0]\n# B_f0 = eye(2)\nB_f0={\n ( (1.0),(0.0) ), # s_next=0; (vals for s_prev over action=0)\n ( (0.0),(1.0) ) # s_next=1\n}\n\n# B_f1: factor 1 (3 states), 3 actions. Format B[s_next, s_prev, action_idx]\n# B_f1[:,:,action_idx] = eye(3) for each action\nB_f1={\n ( (1.0,1.0,1.0), (0.0,0.0,0.0), (0.0,0.0,0.0) ), # s_next=0; (vals for actions over s_prev=0), (vals for actions over s_prev=1), ...\n ( (0.0,0.0,0.0), (1.0,1.0,1.0), (0.0,0.0,0.0) ), # s_next=1\n ( (0.0,0.0,0.0), (0.0,0.0,0.0), (1.0,1.0,1.0) ) # s_next=2\n}\n\n# C_m0: num_obs[0]=3. Defaults to zeros.\nC_m0={(0.0,0.0,0.0)}\n\n# C_m1: num_obs[1]=3. C[1][0]=1.0, C[1][1]=-2.0\nC_m1={(1.0,-2.0,0.0)}\n\n# C_m2: num_obs[2]=3. 
Defaults to zeros.\nC_m2={(0.0,0.0,0.0)}\n\n# D_f0: factor 0 (2 states). Uniform prior.\nD_f0={(0.5,0.5)}\n\n# D_f1: factor 1 (3 states). Uniform prior.\nD_f1={(0.33333,0.33333,0.33333)}",
"InitialParameterization_raw_content": "# A_m0: num_obs[0]=3, num_states[0]=2, num_states[1]=3. Format: A[obs_idx][state_f0_idx][state_f1_idx]\n# A[0][:, :, 0] = np.ones((3,2))/3\n# A[0][:, :, 1] = np.ones((3,2))/3\n# A[0][:, :, 2] = [[0.8,0.2],[0.0,0.0],[0.2,0.8]] (obs x state_f0 for state_f1=2)\nA_m0={\n ( (0.33333,0.33333,0.8), (0.33333,0.33333,0.2) ), # obs=0; (vals for s_f1 over s_f0=0), (vals for s_f1 over s_f0=1)\n ( (0.33333,0.33333,0.0), (0.33333,0.33333,0.0) ), # obs=1\n ( (0.33333,0.33333,0.2), (0.33333,0.33333,0.8) ) # obs=2\n}\n\n# A_m1: num_obs[1]=3, num_states[0]=2, num_states[1]=3\n# A[1][2, :, 0] = [1.0,1.0]\n# A[1][0:2, :, 1] = softmax([[1,0],[0,1]]) approx [[0.731,0.269],[0.269,0.731]]\n# A[1][2, :, 2] = [1.0,1.0]\n# Others are 0.\nA_m1={\n ( (0.0,0.731,0.0), (0.0,0.269,0.0) ), # obs=0\n ( (0.0,0.269,0.0), (0.0,0.731,0.0) ), # obs=1\n ( (1.0,0.0,1.0), (1.0,0.0,1.0) ) # obs=2\n}\n\n# A_m2: num_obs[2]=3, num_states[0]=2, num_states[1]=3\n# A[2][0,:,0]=1.0; A[2][1,:,1]=1.0; A[2][2,:,2]=1.0\n# Others are 0.\nA_m2={\n ( (1.0,0.0,0.0), (1.0,0.0,0.0) ), # obs=0\n ( (0.0,1.0,0.0), (0.0,1.0,0.0) ), # obs=1\n ( (0.0,0.0,1.0), (0.0,0.0,1.0) ) # obs=2\n}\n\n# B_f0: factor 0 (2 states), uncontrolled (1 action). Format B[s_next, s_prev, action=0]\n# B_f0 = eye(2)\nB_f0={\n ( (1.0),(0.0) ), # s_next=0; (vals for s_prev over action=0)\n ( (0.0),(1.0) ) # s_next=1\n}\n\n# B_f1: factor 1 (3 states), 3 actions. Format B[s_next, s_prev, action_idx]\n# B_f1[:,:,action_idx] = eye(3) for each action\nB_f1={\n ( (1.0,1.0,1.0), (0.0,0.0,0.0), (0.0,0.0,0.0) ), # s_next=0; (vals for actions over s_prev=0), (vals for actions over s_prev=1), ...\n ( (0.0,0.0,0.0), (1.0,1.0,1.0), (0.0,0.0,0.0) ), # s_next=1\n ( (0.0,0.0,0.0), (0.0,0.0,0.0), (1.0,1.0,1.0) ) # s_next=2\n}\n\n# C_m0: num_obs[0]=3. Defaults to zeros.\nC_m0={(0.0,0.0,0.0)}\n\n# C_m1: num_obs[1]=3. C[1][0]=1.0, C[1][1]=-2.0\nC_m1={(1.0,-2.0,0.0)}\n\n# C_m2: num_obs[2]=3. 
Defaults to zeros.\nC_m2={(0.0,0.0,0.0)}\n\n# D_f0: factor 0 (2 states). Uniform prior.\nD_f0={(0.5,0.5)}\n\n# D_f1: factor 1 (3 states). Uniform prior.\nD_f1={(0.33333,0.33333,0.33333)}",
"Equations": "# Standard PyMDP agent equations for state inference (infer_states),\n# policy inference (infer_policies), and action sampling (sample_action).\n# qs = infer_states(o)\n# q_pi, efe = infer_policies()\n# action = sample_action()",
"Time": "Dynamic\nDiscreteTime=t\nModelTimeHorizon=Unbounded # Agent definition is generally unbounded, specific simulation runs have a horizon.",
"ActInfOntologyAnnotation": "A_m0=LikelihoodMatrixModality0\nA_m1=LikelihoodMatrixModality1\nA_m2=LikelihoodMatrixModality2\nB_f0=TransitionMatrixFactor0\nB_f1=TransitionMatrixFactor1\nC_m0=LogPreferenceVectorModality0\nC_m1=LogPreferenceVectorModality1\nC_m2=LogPreferenceVectorModality2\nD_f0=PriorOverHiddenStatesFactor0\nD_f1=PriorOverHiddenStatesFactor1\ns_f0=HiddenStateFactor0\ns_f1=HiddenStateFactor1\ns_prime_f0=NextHiddenStateFactor0\ns_prime_f1=NextHiddenStateFactor1\no_m0=ObservationModality0\no_m1=ObservationModality1\no_m2=ObservationModality2\n\u03c0_f1=PolicyVectorFactor1 # Distribution over actions for factor 1\nu_f1=ActionFactor1 # Chosen action for factor 1\nG=ExpectedFreeEnergy",
"ModelParameters": "num_hidden_states_factors: [2, 3] # s_f0[2], s_f1[3]\nnum_obs_modalities: [3, 3, 3] # o_m0[3], o_m1[3], o_m2[3]\nnum_control_factors: [1, 3] # B_f0 actions_dim=1 (uncontrolled), B_f1 actions_dim=3 (controlled by pi_f1)",
"Footer": "Multifactor PyMDP Agent v1 - GNN Representation",
"Signature": "NA"
},
"other_sections": {},
"gnnsection": {},
"gnnversionandflags": {},
"equations": "# Standard PyMDP agent equations for state inference (infer_states),\n# policy inference (infer_policies), and action sampling (sample_action).\n# qs = infer_states(o)\n# q_pi, efe = infer_policies()\n# action = sample_action()",
"ModelParameters": {
"num_hidden_states_factors": "[2, 3]",
"num_obs_modalities": "[3, 3, 3]",
"num_control_factors": "[1, 3]"
},
"num_hidden_states_factors": "[2, 3]",
"num_obs_modalities": "[3, 3, 3]",
"num_control_factors": "[1, 3]",
"footer": "Multifactor PyMDP Agent v1 - GNN Representation"
}
pymdp_pomdp_agent.json

GNN Model Summary: Multifactor PyMDP Agent v1
Source File: /home/trim/Documents/GitHub/GeneralizedNotationNotation/src/gnn/examples/pymdp_pomdp_agent.md
Metadata:
description: This model represents a PyMDP agent with multiple observation modalities and hidden state factors.
- Observation modalities: "state_observation" (3 outcomes), "reward" (3 outcomes), "decision_proprioceptive" (3 outcomes)
- Hidden state factors: "reward_level" (2 states), "decision_state" (3 states)
- Control: "decision_state" factor is controllable with 3 possible actions.
The parameterization is derived from a PyMDP Python script example.
States (20):
- ID: A_m0 (dimensions=3,2,3,type=float, original_id=A_m0)
- ID: A_m1 (dimensions=3,2,3,type=float, original_id=A_m1)
- ID: A_m2 (dimensions=3,2,3,type=float, original_id=A_m2)
- ID: B_f0 (dimensions=2,2,1,type=float, original_id=B_f0)
- ID: B_f1 (dimensions=3,3,3,type=float, original_id=B_f1)
- ID: C_m0 (dimensions=3,type=float, original_id=C_m0)
- ID: C_m1 (dimensions=3,type=float, original_id=C_m1)
- ID: C_m2 (dimensions=3,type=float, original_id=C_m2)
- ID: D_f0 (dimensions=2,type=float, original_id=D_f0)
- ID: D_f1 (dimensions=3,type=float, original_id=D_f1)
- ID: s_f0 (dimensions=2,1,type=float, original_id=s_f0)
- ID: s_f1 (dimensions=3,1,type=float, original_id=s_f1)
- ID: s_prime_f0 (dimensions=2,1,type=float, original_id=s_prime_f0)
- ID: s_prime_f1 (dimensions=3,1,type=float, original_id=s_prime_f1)
- ID: o_m0 (dimensions=3,1,type=float, original_id=o_m0)
- ID: o_m1 (dimensions=3,1,type=float, original_id=o_m1)
- ID: o_m2 (dimensions=3,1,type=float, original_id=o_m2)
- ID: u_f1 (dimensions=1,type=int, original_id=u_f1)
- ID: G (dimensions=1,type=float, original_id=G)
- ID: t (dimensions=1,type=int, original_id=t)
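Each state's `dimensions` string packs shape and dtype into one comma-separated value (e.g. `3,2,3,type=float`). A minimal parser sketch; the function name is illustrative, not part of the GNN toolchain, and the float default for a missing `type=` field is an assumption:

```python
def parse_dimensions(spec: str) -> tuple[tuple[int, ...], str]:
    """Split a GNN dimensions string like '3,2,3,type=float' into (shape, dtype)."""
    dims = []
    dtype = "float"  # assumed default when no type= field is present
    for part in spec.split(","):
        part = part.strip()
        if part.startswith("type="):
            dtype = part[len("type="):]
        else:
            dims.append(int(part))
    return tuple(dims), dtype

print(parse_dimensions("3,2,3,type=float"))  # ((3, 2, 3), 'float')
print(parse_dimensions("1,type=int"))        # ((1,), 'int')
```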
Initial Parameters (0):
General Parameters (0):
Observations (0):
Transitions (5):
  - (D_f0, D_f1) - (s_f0, s_f1)
  - (s_f0, s_f1) - (A_m0, A_m1, A_m2)
  - (A_m0, A_m1, A_m2) - (o_m0, o_m1, o_m2)
  - (B_f0, B_f1) - (s_prime_f0, s_prime_f1)
  - (C_m0, C_m1, C_m2) > G
Ontology Annotations (20):
A_m0 = LikelihoodMatrixModality0
A_m1 = LikelihoodMatrixModality1
A_m2 = LikelihoodMatrixModality2
B_f0 = TransitionMatrixFactor0
B_f1 = TransitionMatrixFactor1
C_m0 = LogPreferenceVectorModality0
C_m1 = LogPreferenceVectorModality1
C_m2 = LogPreferenceVectorModality2
D_f0 = PriorOverHiddenStatesFactor0
D_f1 = PriorOverHiddenStatesFactor1
s_f0 = HiddenStateFactor0
s_f1 = HiddenStateFactor1
s_prime_f0 = NextHiddenStateFactor0
s_prime_f1 = NextHiddenStateFactor1
o_m0 = ObservationModality0
o_m1 = ObservationModality1
o_m2 = ObservationModality2
π_f1 = PolicyVectorFactor1 # Distribution over actions for factor 1
u_f1 = ActionFactor1 # Chosen action for factor 1
G = ExpectedFreeEnergy
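Per the InitialParameterization above, B_f1 stacks an identity over each of its 3 actions (B_f1[:, :, action] = eye(3)), and B_f0 is a single 2x2 identity. A NumPy sketch that builds B_f1 this way and checks that every action slice is column-stochastic, a basic sanity check for any GNN transition matrix:

```python
import numpy as np

n_states, n_actions = 3, 3  # factor 1 ("decision_state"): 3 states, 3 actions

# B[s_next, s_prev, action] = eye(3) for each action, as parameterized above
B_f1 = np.stack([np.eye(n_states)] * n_actions, axis=2)
assert B_f1.shape == (3, 3, 3)  # matches the declared dimensions of B_f1

# Each action slice must be column-stochastic: sums over s_next equal 1
for a in range(n_actions):
    assert np.allclose(B_f1[:, :, a].sum(axis=0), 1.0)

# Identity dynamics: applying any action leaves a known state unchanged
s = np.array([0.0, 1.0, 0.0])
print(B_f1[:, :, 0] @ s)  # [0. 1. 0.]
```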
pymdp_pomdp_agent.txt{
"file_path": "/home/trim/Documents/GitHub/GeneralizedNotationNotation/src/gnn/examples/rxinfer_multiagent_gnn.md",
"name": "Multi-agent Trajectory Planning",
"metadata": {
"description": "This model represents a multi-agent trajectory planning scenario in RxInfer.jl.\nIt includes:\n- State space model for agents moving in a 2D environment\n- Obstacle avoidance constraints\n- Goal-directed behavior\n- Inter-agent collision avoidance\nThe model can be used to simulate trajectory planning in various environments with obstacles."
},
"states": [
{
"id": "dt",
"dimensions": "1,type=float",
"original_id": "dt"
},
{
"id": "gamma",
"dimensions": "1,type=float",
"original_id": "gamma"
},
{
"id": "nr_steps",
"dimensions": "1,type=int",
"original_id": "nr_steps"
},
{
"id": "nr_iterations",
"dimensions": "1,type=int",
"original_id": "nr_iterations"
},
{
"id": "nr_agents",
"dimensions": "1,type=int",
"original_id": "nr_agents"
},
{
"id": "softmin_temperature",
"dimensions": "1,type=float",
"original_id": "softmin_temperature"
},
{
"id": "intermediate_steps",
"dimensions": "1,type=int",
"original_id": "intermediate_steps"
},
{
"id": "save_intermediates",
"dimensions": "1,type=bool",
"original_id": "save_intermediates"
},
{
"id": "A",
"dimensions": "4,4,type=float",
"original_id": "A"
},
{
"id": "B",
"dimensions": "4,2,type=float",
"original_id": "B"
},
{
"id": "C",
"dimensions": "2,4,type=float",
"original_id": "C"
},
{
"id": "initial_state_variance",
"dimensions": "1,type=float",
"original_id": "initial_state_variance"
},
{
"id": "control_variance",
"dimensions": "1,type=float",
"original_id": "control_variance"
},
{
"id": "goal_constraint_variance",
"dimensions": "1,type=float",
"original_id": "goal_constraint_variance"
},
{
"id": "gamma_shape",
"dimensions": "1,type=float",
"original_id": "gamma_shape"
},
{
"id": "gamma_scale_factor",
"dimensions": "1,type=float",
"original_id": "gamma_scale_factor"
},
{
"id": "x_limits",
"dimensions": "2,type=float",
"original_id": "x_limits"
},
{
"id": "y_limits",
"dimensions": "2,type=float",
"original_id": "y_limits"
},
{
"id": "fps",
"dimensions": "1,type=int",
"original_id": "fps"
},
{
"id": "heatmap_resolution",
"dimensions": "1,type=int",
"original_id": "heatmap_resolution"
},
{
"id": "plot_width",
"dimensions": "1,type=int",
"original_id": "plot_width"
},
{
"id": "plot_height",
"dimensions": "1,type=int",
"original_id": "plot_height"
},
{
"id": "agent_alpha",
"dimensions": "1,type=float",
"original_id": "agent_alpha"
},
{
"id": "target_alpha",
"dimensions": "1,type=float",
"original_id": "target_alpha"
},
{
"id": "color_palette",
"dimensions": "1,type=string",
"original_id": "color_palette"
},
{
"id": "door_obstacle_center_1",
"dimensions": "2,type=float",
"original_id": "door_obstacle_center_1"
},
{
"id": "door_obstacle_size_1",
"dimensions": "2,type=float",
"original_id": "door_obstacle_size_1"
},
{
"id": "door_obstacle_center_2",
"dimensions": "2,type=float",
"original_id": "door_obstacle_center_2"
},
{
"id": "door_obstacle_size_2",
"dimensions": "2,type=float",
"original_id": "door_obstacle_size_2"
},
{
"id": "wall_obstacle_center",
"dimensions": "2,type=float",
"original_id": "wall_obstacle_center"
},
{
"id": "wall_obstacle_size",
"dimensions": "2,type=float",
"original_id": "wall_obstacle_size"
},
{
"id": "combined_obstacle_center_1",
"dimensions": "2,type=float",
"original_id": "combined_obstacle_center_1"
},
{
"id": "combined_obstacle_size_1",
"dimensions": "2,type=float",
"original_id": "combined_obstacle_size_1"
},
{
"id": "combined_obstacle_center_2",
"dimensions": "2,type=float",
"original_id": "combined_obstacle_center_2"
},
{
"id": "combined_obstacle_size_2",
"dimensions": "2,type=float",
"original_id": "combined_obstacle_size_2"
},
{
"id": "combined_obstacle_center_3",
"dimensions": "2,type=float",
"original_id": "combined_obstacle_center_3"
},
{
"id": "combined_obstacle_size_3",
"dimensions": "2,type=float",
"original_id": "combined_obstacle_size_3"
},
{
"id": "agent1_id",
"dimensions": "1,type=int",
"original_id": "agent1_id"
},
{
"id": "agent1_radius",
"dimensions": "1,type=float",
"original_id": "agent1_radius"
},
{
"id": "agent1_initial_position",
"dimensions": "2,type=float",
"original_id": "agent1_initial_position"
},
{
"id": "agent1_target_position",
"dimensions": "2,type=float",
"original_id": "agent1_target_position"
},
{
"id": "agent2_id",
"dimensions": "1,type=int",
"original_id": "agent2_id"
},
{
"id": "agent2_radius",
"dimensions": "1,type=float",
"original_id": "agent2_radius"
},
{
"id": "agent2_initial_position",
"dimensions": "2,type=float",
"original_id": "agent2_initial_position"
},
{
"id": "agent2_target_position",
"dimensions": "2,type=float",
"original_id": "agent2_target_position"
},
{
"id": "agent3_id",
"dimensions": "1,type=int",
"original_id": "agent3_id"
},
{
"id": "agent3_radius",
"dimensions": "1,type=float",
"original_id": "agent3_radius"
},
{
"id": "agent3_initial_position",
"dimensions": "2,type=float",
"original_id": "agent3_initial_position"
},
{
"id": "agent3_target_position",
"dimensions": "2,type=float",
"original_id": "agent3_target_position"
},
{
"id": "agent4_id",
"dimensions": "1,type=int",
"original_id": "agent4_id"
},
{
"id": "agent4_radius",
"dimensions": "1,type=float",
"original_id": "agent4_radius"
},
{
"id": "agent4_initial_position",
"dimensions": "2,type=float",
"original_id": "agent4_initial_position"
},
{
"id": "agent4_target_position",
"dimensions": "2,type=float",
"original_id": "agent4_target_position"
},
{
"id": "experiment_seeds",
"dimensions": "2,type=int",
"original_id": "experiment_seeds"
},
{
"id": "results_dir",
"dimensions": "1,type=string",
"original_id": "results_dir"
},
{
"id": "animation_template",
"dimensions": "1,type=string",
"original_id": "animation_template"
},
{
"id": "control_vis_filename",
"dimensions": "1,type=string",
"original_id": "control_vis_filename"
},
{
"id": "obstacle_distance_filename",
"dimensions": "1,type=string",
"original_id": "obstacle_distance_filename"
},
{
"id": "path_uncertainty_filename",
"dimensions": "1,type=string",
"original_id": "path_uncertainty_filename"
},
{
"id": "convergence_filename",
"dimensions": "1,type=string",
"original_id": "convergence_filename"
}
],
"parameters": {},
"initial_parameters": {},
"observations": [],
"transitions": [
{
"sources": [
"dt"
],
"operator": ">",
"targets": [
"A"
],
"attributes": {}
},
{
"sources": [
"A",
"B",
"C"
],
"operator": ">",
"targets": [
"state_space_model"
],
"attributes": {}
},
{
"sources": [
"state_space_model",
"initial_state_variance",
"control_variance"
],
"operator": ">",
"targets": [
"agent_trajectories"
],
"attributes": {}
},
{
"sources": [
"agent_trajectories",
"goal_constraint_variance"
],
"operator": ">",
"targets": [
"goal_directed_behavior"
],
"attributes": {}
},
{
"sources": [
"agent_trajectories",
"gamma",
"gamma_shape",
"gamma_scale_factor"
],
"operator": ">",
"targets": [
"obstacle_avoidance"
],
"attributes": {}
},
{
"sources": [
"agent_trajectories",
"nr_agents"
],
"operator": ">",
"targets": [
"collision_avoidance"
],
"attributes": {}
},
{
"sources": [
"goal_directed_behavior",
"obstacle_avoidance",
"collision_avoidance"
],
"operator": ">",
"targets": [
"planning_system"
],
"attributes": {}
}
],
"ontology_annotations": {
"dt": "TimeStep",
"gamma": "ConstraintParameter",
"nr_steps": "TrajectoryLength",
"nr_iterations": "InferenceIterations",
"nr_agents": "NumberOfAgents",
"softmin_temperature": "SoftminTemperature",
"A": "StateTransitionMatrix",
"B": "ControlInputMatrix",
"C": "ObservationMatrix",
"initial_state_variance": "InitialStateVariance",
"control_variance": "ControlVariance",
"goal_constraint_variance": "GoalConstraintVariance"
},
"equations_text": "",
"time_info": {
"ModelTimeHorizon": "nr_steps"
},
"footer_text": "",
"signature": {
"Creator": "AI Assistant for GNN",
"Date": "2024-07-27",
"Status": "Example for RxInfer.jl multi-agent trajectory planning"
},
"raw_sections": {
"GNNSection": "RxInferMultiAgentTrajectoryPlanning",
"GNNVersionAndFlags": "GNN v1",
"ModelName": "Multi-agent Trajectory Planning",
"ModelAnnotation": "This model represents a multi-agent trajectory planning scenario in RxInfer.jl.\nIt includes:\n- State space model for agents moving in a 2D environment\n- Obstacle avoidance constraints\n- Goal-directed behavior\n- Inter-agent collision avoidance\nThe model can be used to simulate trajectory planning in various environments with obstacles.",
"StateSpaceBlock": "# Model parameters\ndt[1,type=float] # Time step for the state space model\ngamma[1,type=float] # Constraint parameter for the Halfspace node\nnr_steps[1,type=int] # Number of time steps in the trajectory\nnr_iterations[1,type=int] # Number of inference iterations\nnr_agents[1,type=int] # Number of agents in the simulation\nsoftmin_temperature[1,type=float] # Temperature parameter for the softmin function\nintermediate_steps[1,type=int] # Intermediate results saving interval\nsave_intermediates[1,type=bool] # Whether to save intermediate results\n\n# State space matrices\nA[4,4,type=float] # State transition matrix\nB[4,2,type=float] # Control input matrix\nC[2,4,type=float] # Observation matrix\n\n# Prior distributions\ninitial_state_variance[1,type=float] # Prior on initial state\ncontrol_variance[1,type=float] # Prior on control inputs\ngoal_constraint_variance[1,type=float] # Goal constraints variance\ngamma_shape[1,type=float] # Parameters for GammaShapeRate prior\ngamma_scale_factor[1,type=float] # Parameters for GammaShapeRate prior\n\n# Visualization parameters\nx_limits[2,type=float] # Plot boundaries (x-axis)\ny_limits[2,type=float] # Plot boundaries (y-axis)\nfps[1,type=int] # Animation frames per second\nheatmap_resolution[1,type=int] # Heatmap resolution\nplot_width[1,type=int] # Plot width\nplot_height[1,type=int] # Plot height\nagent_alpha[1,type=float] # Visualization alpha for agents\ntarget_alpha[1,type=float] # Visualization alpha for targets\ncolor_palette[1,type=string] # Color palette for visualization\n\n# Environment definitions\ndoor_obstacle_center_1[2,type=float] # Door environment, obstacle 1 center\ndoor_obstacle_size_1[2,type=float] # Door environment, obstacle 1 size\ndoor_obstacle_center_2[2,type=float] # Door environment, obstacle 2 center\ndoor_obstacle_size_2[2,type=float] # Door environment, obstacle 2 size\n\nwall_obstacle_center[2,type=float] # Wall environment, obstacle 
center\nwall_obstacle_size[2,type=float] # Wall environment, obstacle size\n\ncombined_obstacle_center_1[2,type=float] # Combined environment, obstacle 1 center\ncombined_obstacle_size_1[2,type=float] # Combined environment, obstacle 1 size\ncombined_obstacle_center_2[2,type=float] # Combined environment, obstacle 2 center\ncombined_obstacle_size_2[2,type=float] # Combined environment, obstacle 2 size\ncombined_obstacle_center_3[2,type=float] # Combined environment, obstacle 3 center\ncombined_obstacle_size_3[2,type=float] # Combined environment, obstacle 3 size\n\n# Agent configurations\nagent1_id[1,type=int] # Agent 1 ID\nagent1_radius[1,type=float] # Agent 1 radius\nagent1_initial_position[2,type=float] # Agent 1 initial position\nagent1_target_position[2,type=float] # Agent 1 target position\n\nagent2_id[1,type=int] # Agent 2 ID\nagent2_radius[1,type=float] # Agent 2 radius\nagent2_initial_position[2,type=float] # Agent 2 initial position\nagent2_target_position[2,type=float] # Agent 2 target position\n\nagent3_id[1,type=int] # Agent 3 ID\nagent3_radius[1,type=float] # Agent 3 radius\nagent3_initial_position[2,type=float] # Agent 3 initial position\nagent3_target_position[2,type=float] # Agent 3 target position\n\nagent4_id[1,type=int] # Agent 4 ID\nagent4_radius[1,type=float] # Agent 4 radius\nagent4_initial_position[2,type=float] # Agent 4 initial position\nagent4_target_position[2,type=float] # Agent 4 target position\n\n# Experiment configurations\nexperiment_seeds[2,type=int] # Random seeds for reproducibility\nresults_dir[1,type=string] # Base directory for results\nanimation_template[1,type=string] # Filename template for animations\ncontrol_vis_filename[1,type=string] # Filename for control visualization\nobstacle_distance_filename[1,type=string] # Filename for obstacle distance plot\npath_uncertainty_filename[1,type=string] # Filename for path uncertainty plot\nconvergence_filename[1,type=string] # Filename for convergence plot",
"Connections": "# Model parameters\ndt > A\n(A, B, C) > state_space_model\n\n# Agent trajectories\n(state_space_model, initial_state_variance, control_variance) > agent_trajectories\n\n# Goal constraints\n(agent_trajectories, goal_constraint_variance) > goal_directed_behavior\n\n# Obstacle avoidance\n(agent_trajectories, gamma, gamma_shape, gamma_scale_factor) > obstacle_avoidance\n\n# Collision avoidance\n(agent_trajectories, nr_agents) > collision_avoidance\n\n# Complete planning system\n(goal_directed_behavior, obstacle_avoidance, collision_avoidance) > planning_system",
"InitialParameterization": "# Model parameters\ndt=1.0\ngamma=1.0\nnr_steps=40\nnr_iterations=350\nnr_agents=4\nsoftmin_temperature=10.0\nintermediate_steps=10\nsave_intermediates=false\n\n# State space matrices\n# A = [1 dt 0 0; 0 1 0 0; 0 0 1 dt; 0 0 0 1]\nA={(1.0, 1.0, 0.0, 0.0), (0.0, 1.0, 0.0, 0.0), (0.0, 0.0, 1.0, 1.0), (0.0, 0.0, 0.0, 1.0)}\n\n# B = [0 0; dt 0; 0 0; 0 dt]\nB={(0.0, 0.0), (1.0, 0.0), (0.0, 0.0), (0.0, 1.0)}\n\n# C = [1 0 0 0; 0 0 1 0]\nC={(1.0, 0.0, 0.0, 0.0), (0.0, 0.0, 1.0, 0.0)}\n\n# Prior distributions\ninitial_state_variance=100.0\ncontrol_variance=0.1\ngoal_constraint_variance=0.00001\ngamma_shape=1.5\ngamma_scale_factor=0.5\n\n# Visualization parameters\nx_limits={(-20, 20)}\ny_limits={(-20, 20)}\nfps=15\nheatmap_resolution=100\nplot_width=800\nplot_height=400\nagent_alpha=1.0\ntarget_alpha=0.2\ncolor_palette=\"tab10\"\n\n# Environment definitions\ndoor_obstacle_center_1={(-40.0, 0.0)}\ndoor_obstacle_size_1={(70.0, 5.0)}\ndoor_obstacle_center_2={(40.0, 0.0)}\ndoor_obstacle_size_2={(70.0, 5.0)}\n\nwall_obstacle_center={(0.0, 0.0)}\nwall_obstacle_size={(10.0, 5.0)}\n\ncombined_obstacle_center_1={(-50.0, 0.0)}\ncombined_obstacle_size_1={(70.0, 2.0)}\ncombined_obstacle_center_2={(50.0, 0.0)}\ncombined_obstacle_size_2={(70.0, 2.0)}\ncombined_obstacle_center_3={(5.0, -1.0)}\ncombined_obstacle_size_3={(3.0, 10.0)}\n\n# Agent configurations\nagent1_id=1\nagent1_radius=2.5\nagent1_initial_position={(-4.0, 10.0)}\nagent1_target_position={(-10.0, -10.0)}\n\nagent2_id=2\nagent2_radius=1.5\nagent2_initial_position={(-10.0, 5.0)}\nagent2_target_position={(10.0, -15.0)}\n\nagent3_id=3\nagent3_radius=1.0\nagent3_initial_position={(-15.0, -10.0)}\nagent3_target_position={(10.0, 10.0)}\n\nagent4_id=4\nagent4_radius=2.5\nagent4_initial_position={(0.0, -10.0)}\nagent4_target_position={(-10.0, 15.0)}\n\n# Experiment configurations\nexperiment_seeds={(42, 
123)}\nresults_dir=\"results\"\nanimation_template=\"{environment}_{seed}.gif\"\ncontrol_vis_filename=\"control_signals.gif\"\nobstacle_distance_filename=\"obstacle_distance.png\"\npath_uncertainty_filename=\"path_uncertainty.png\"\nconvergence_filename=\"convergence.png\"",
"InitialParameterization_raw_content": "# Model parameters\ndt=1.0\ngamma=1.0\nnr_steps=40\nnr_iterations=350\nnr_agents=4\nsoftmin_temperature=10.0\nintermediate_steps=10\nsave_intermediates=false\n\n# State space matrices\n# A = [1 dt 0 0; 0 1 0 0; 0 0 1 dt; 0 0 0 1]\nA={(1.0, 1.0, 0.0, 0.0), (0.0, 1.0, 0.0, 0.0), (0.0, 0.0, 1.0, 1.0), (0.0, 0.0, 0.0, 1.0)}\n\n# B = [0 0; dt 0; 0 0; 0 dt]\nB={(0.0, 0.0), (1.0, 0.0), (0.0, 0.0), (0.0, 1.0)}\n\n# C = [1 0 0 0; 0 0 1 0]\nC={(1.0, 0.0, 0.0, 0.0), (0.0, 0.0, 1.0, 0.0)}\n\n# Prior distributions\ninitial_state_variance=100.0\ncontrol_variance=0.1\ngoal_constraint_variance=0.00001\ngamma_shape=1.5\ngamma_scale_factor=0.5\n\n# Visualization parameters\nx_limits={(-20, 20)}\ny_limits={(-20, 20)}\nfps=15\nheatmap_resolution=100\nplot_width=800\nplot_height=400\nagent_alpha=1.0\ntarget_alpha=0.2\ncolor_palette=\"tab10\"\n\n# Environment definitions\ndoor_obstacle_center_1={(-40.0, 0.0)}\ndoor_obstacle_size_1={(70.0, 5.0)}\ndoor_obstacle_center_2={(40.0, 0.0)}\ndoor_obstacle_size_2={(70.0, 5.0)}\n\nwall_obstacle_center={(0.0, 0.0)}\nwall_obstacle_size={(10.0, 5.0)}\n\ncombined_obstacle_center_1={(-50.0, 0.0)}\ncombined_obstacle_size_1={(70.0, 2.0)}\ncombined_obstacle_center_2={(50.0, 0.0)}\ncombined_obstacle_size_2={(70.0, 2.0)}\ncombined_obstacle_center_3={(5.0, -1.0)}\ncombined_obstacle_size_3={(3.0, 10.0)}\n\n# Agent configurations\nagent1_id=1\nagent1_radius=2.5\nagent1_initial_position={(-4.0, 10.0)}\nagent1_target_position={(-10.0, -10.0)}\n\nagent2_id=2\nagent2_radius=1.5\nagent2_initial_position={(-10.0, 5.0)}\nagent2_target_position={(10.0, -15.0)}\n\nagent3_id=3\nagent3_radius=1.0\nagent3_initial_position={(-15.0, -10.0)}\nagent3_target_position={(10.0, 10.0)}\n\nagent4_id=4\nagent4_radius=2.5\nagent4_initial_position={(0.0, -10.0)}\nagent4_target_position={(-10.0, 15.0)}\n\n# Experiment configurations\nexperiment_seeds={(42, 
123)}\nresults_dir=\"results\"\nanimation_template=\"{environment}_{seed}.gif\"\ncontrol_vis_filename=\"control_signals.gif\"\nobstacle_distance_filename=\"obstacle_distance.png\"\npath_uncertainty_filename=\"path_uncertainty.png\"\nconvergence_filename=\"convergence.png\"",
"Equations": "# State space model:\n# x_{t+1} = A * x_t + B * u_t + w_t, w_t ~ N(0, control_variance)\n# y_t = C * x_t + v_t, v_t ~ N(0, observation_variance)\n#\n# Obstacle avoidance constraint:\n# p(x_t | obstacle) ~ N(d(x_t, obstacle), gamma)\n# where d(x_t, obstacle) is the distance from position x_t to the nearest obstacle\n#\n# Goal constraint:\n# p(x_T | goal) ~ N(goal, goal_constraint_variance)\n# where x_T is the final position\n#\n# Collision avoidance constraint:\n# p(x_i, x_j) ~ N(||x_i - x_j|| - (r_i + r_j), gamma)\n# where x_i, x_j are positions of agents i and j, r_i, r_j are their radii",
"Time": "Dynamic\nDiscreteTime\nModelTimeHorizon=nr_steps",
"ActInfOntologyAnnotation": "dt=TimeStep\ngamma=ConstraintParameter\nnr_steps=TrajectoryLength\nnr_iterations=InferenceIterations\nnr_agents=NumberOfAgents\nsoftmin_temperature=SoftminTemperature\nA=StateTransitionMatrix\nB=ControlInputMatrix\nC=ObservationMatrix\ninitial_state_variance=InitialStateVariance\ncontrol_variance=ControlVariance\ngoal_constraint_variance=GoalConstraintVariance",
"ModelParameters": "nr_agents=4\nnr_steps=40\nnr_iterations=350",
"Footer": "Multi-agent Trajectory Planning - GNN Representation for RxInfer.jl",
"Signature": "Creator: AI Assistant for GNN\nDate: 2024-07-27\nStatus: Example for RxInfer.jl multi-agent trajectory planning"
},
"other_sections": {},
"gnnsection": {},
"gnnversionandflags": {},
"equations": "# State space model:\n# x_{t+1} = A * x_t + B * u_t + w_t, w_t ~ N(0, control_variance)\n# y_t = C * x_t + v_t, v_t ~ N(0, observation_variance)\n#\n# Obstacle avoidance constraint:\n# p(x_t | obstacle) ~ N(d(x_t, obstacle), gamma)\n# where d(x_t, obstacle) is the distance from position x_t to the nearest obstacle\n#\n# Goal constraint:\n# p(x_T | goal) ~ N(goal, goal_constraint_variance)\n# where x_T is the final position\n#\n# Collision avoidance constraint:\n# p(x_i, x_j) ~ N(||x_i - x_j|| - (r_i + r_j), gamma)\n# where x_i, x_j are positions of agents i and j, r_i, r_j are their radii",
"ModelParameters": {},
"footer": "Multi-agent Trajectory Planning - GNN Representation for RxInfer.jl"
}
rxinfer_multiagent_gnn.json

GNN Model Summary: Multi-agent Trajectory Planning
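The linear state-space equations given in the model (x_{t+1} = A x_t + B u_t + w_t, y_t = C x_t + v_t) can be exercised directly with the A, B, C values from the InitialParameterization. A minimal numpy sketch, assuming the common reading of the state as [px, vx, py, vy] (the GNN file does not name the components) and a hypothetical control input:

```python
import numpy as np

dt = 1.0
# Constant-velocity dynamics; state x = [px, vx, py, vy] is an assumed layout,
# consistent with A, B, C below but not stated explicitly in the GNN file.
A = np.array([[1, dt, 0, 0],
              [0, 1,  0, 0],
              [0, 0,  1, dt],
              [0, 0,  0, 1]], dtype=float)   # state transition matrix
B = np.array([[0, 0],
              [dt, 0],
              [0, 0],
              [0, dt]], dtype=float)          # control enters the velocity components
C = np.array([[1, 0, 0, 0],
              [0, 0, 1, 0]], dtype=float)     # observe positions only

x = np.array([-4.0, 0.0, 10.0, 0.0])          # agent 1 initial position, zero velocity
u = np.array([0.5, -0.5])                     # hypothetical control input

x_det = A @ x + B @ u                         # deterministic part of x_{t+1}
rng = np.random.default_rng(42)
x_next = x_det + rng.normal(0.0, np.sqrt(0.1), size=4)  # w_t ~ N(0, control_variance)
y = C @ x_next                                # y_t = C x_{t+1} (observation noise omitted)
```

One step moves each position by its velocity and bumps the velocities by the control input, which is why only rows 2 and 4 of B are nonzero.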
Source File: /home/trim/Documents/GitHub/GeneralizedNotationNotation/src/gnn/examples/rxinfer_multiagent_gnn.md
Metadata:
description: This model represents a multi-agent trajectory planning scenario in RxInfer.jl.
It includes:
- State space model for agents moving in a 2D environment
- Obstacle avoidance constraints
- Goal-directed behavior
- Inter-agent collision avoidance
The model can be used to simulate trajectory planning in various environments with obstacles.
States (60):
- ID: dt (dimensions=1,type=float, original_id=dt)
- ID: gamma (dimensions=1,type=float, original_id=gamma)
- ID: nr_steps (dimensions=1,type=int, original_id=nr_steps)
- ID: nr_iterations (dimensions=1,type=int, original_id=nr_iterations)
- ID: nr_agents (dimensions=1,type=int, original_id=nr_agents)
- ID: softmin_temperature (dimensions=1,type=float, original_id=softmin_temperature)
- ID: intermediate_steps (dimensions=1,type=int, original_id=intermediate_steps)
- ID: save_intermediates (dimensions=1,type=bool, original_id=save_intermediates)
- ID: A (dimensions=4,4,type=float, original_id=A)
- ID: B (dimensions=4,2,type=float, original_id=B)
- ID: C (dimensions=2,4,type=float, original_id=C)
- ID: initial_state_variance (dimensions=1,type=float, original_id=initial_state_variance)
- ID: control_variance (dimensions=1,type=float, original_id=control_variance)
- ID: goal_constraint_variance (dimensions=1,type=float, original_id=goal_constraint_variance)
- ID: gamma_shape (dimensions=1,type=float, original_id=gamma_shape)
- ID: gamma_scale_factor (dimensions=1,type=float, original_id=gamma_scale_factor)
- ID: x_limits (dimensions=2,type=float, original_id=x_limits)
- ID: y_limits (dimensions=2,type=float, original_id=y_limits)
- ID: fps (dimensions=1,type=int, original_id=fps)
- ID: heatmap_resolution (dimensions=1,type=int, original_id=heatmap_resolution)
- ID: plot_width (dimensions=1,type=int, original_id=plot_width)
- ID: plot_height (dimensions=1,type=int, original_id=plot_height)
- ID: agent_alpha (dimensions=1,type=float, original_id=agent_alpha)
- ID: target_alpha (dimensions=1,type=float, original_id=target_alpha)
- ID: color_palette (dimensions=1,type=string, original_id=color_palette)
- ID: door_obstacle_center_1 (dimensions=2,type=float, original_id=door_obstacle_center_1)
- ID: door_obstacle_size_1 (dimensions=2,type=float, original_id=door_obstacle_size_1)
- ID: door_obstacle_center_2 (dimensions=2,type=float, original_id=door_obstacle_center_2)
- ID: door_obstacle_size_2 (dimensions=2,type=float, original_id=door_obstacle_size_2)
- ID: wall_obstacle_center (dimensions=2,type=float, original_id=wall_obstacle_center)
- ID: wall_obstacle_size (dimensions=2,type=float, original_id=wall_obstacle_size)
- ID: combined_obstacle_center_1 (dimensions=2,type=float, original_id=combined_obstacle_center_1)
- ID: combined_obstacle_size_1 (dimensions=2,type=float, original_id=combined_obstacle_size_1)
- ID: combined_obstacle_center_2 (dimensions=2,type=float, original_id=combined_obstacle_center_2)
- ID: combined_obstacle_size_2 (dimensions=2,type=float, original_id=combined_obstacle_size_2)
- ID: combined_obstacle_center_3 (dimensions=2,type=float, original_id=combined_obstacle_center_3)
- ID: combined_obstacle_size_3 (dimensions=2,type=float, original_id=combined_obstacle_size_3)
- ID: agent1_id (dimensions=1,type=int, original_id=agent1_id)
- ID: agent1_radius (dimensions=1,type=float, original_id=agent1_radius)
- ID: agent1_initial_position (dimensions=2,type=float, original_id=agent1_initial_position)
- ID: agent1_target_position (dimensions=2,type=float, original_id=agent1_target_position)
- ID: agent2_id (dimensions=1,type=int, original_id=agent2_id)
- ID: agent2_radius (dimensions=1,type=float, original_id=agent2_radius)
- ID: agent2_initial_position (dimensions=2,type=float, original_id=agent2_initial_position)
- ID: agent2_target_position (dimensions=2,type=float, original_id=agent2_target_position)
- ID: agent3_id (dimensions=1,type=int, original_id=agent3_id)
- ID: agent3_radius (dimensions=1,type=float, original_id=agent3_radius)
- ID: agent3_initial_position (dimensions=2,type=float, original_id=agent3_initial_position)
- ID: agent3_target_position (dimensions=2,type=float, original_id=agent3_target_position)
- ID: agent4_id (dimensions=1,type=int, original_id=agent4_id)
- ID: agent4_radius (dimensions=1,type=float, original_id=agent4_radius)
- ID: agent4_initial_position (dimensions=2,type=float, original_id=agent4_initial_position)
- ID: agent4_target_position (dimensions=2,type=float, original_id=agent4_target_position)
- ID: experiment_seeds (dimensions=2,type=int, original_id=experiment_seeds)
- ID: results_dir (dimensions=1,type=string, original_id=results_dir)
- ID: animation_template (dimensions=1,type=string, original_id=animation_template)
- ID: control_vis_filename (dimensions=1,type=string, original_id=control_vis_filename)
- ID: obstacle_distance_filename (dimensions=1,type=string, original_id=obstacle_distance_filename)
- ID: path_uncertainty_filename (dimensions=1,type=string, original_id=path_uncertainty_filename)
- ID: convergence_filename (dimensions=1,type=string, original_id=convergence_filename)
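The obstacle-avoidance constraint above depends on d(x_t, obstacle), the distance from a position to the nearest obstacle. Each obstacle in the state list is an axis-aligned box given by a center and a size, so one plausible implementation (a sketch; the GNN file does not specify the distance function) is the standard point-to-box Euclidean distance:

```python
import numpy as np

def box_distance(p, center, size):
    """Euclidean distance from point p to an axis-aligned box; 0 if p is inside."""
    half = np.asarray(size, dtype=float) / 2.0
    # Per-axis overshoot beyond the box faces (negative inside the box)
    d = np.abs(np.asarray(p, dtype=float) - np.asarray(center, dtype=float)) - half
    return float(np.linalg.norm(np.maximum(d, 0.0)))

# Wall environment from the model: center (0, 0), size (10, 5)
dist = box_distance((7.0, 0.0), (0.0, 0.0), (10.0, 5.0))  # 2.0 units past the right face
```

Points inside the box return 0, which is the convention the Halfspace-style constraint would penalize most strongly.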
Initial Parameters (0):
General Parameters (0):
Observations (0):
Transitions (7):
- None -> None
- None -> None
- None -> None
- None -> None
- None -> None
- None -> None
- None -> None
Ontology Annotations (12):
dt = TimeStep
gamma = ConstraintParameter
nr_steps = TrajectoryLength
nr_iterations = InferenceIterations
nr_agents = NumberOfAgents
softmin_temperature = SoftminTemperature
A = StateTransitionMatrix
B = ControlInputMatrix
C = ObservationMatrix
initial_state_variance = InitialStateVariance
... (file truncated, total lines: 103)
rxinfer_multiagent_gnn.txt

🗓️ Generated: 2025-06-07 08:30:47
Found 2 GNN files for processing:
src/gnn/examples/pymdp_pomdp_agent.md
src/gnn/examples/rxinfer_multiagent_gnn.md

Pipeline execution data not available.
gnn_processing_step/
gnn_type_check/
gnn_exports/
gnn_examples_visualization/
gnn_rendered_simulators/
test_reports/

Use the --verbose flag for detailed debugging.

Report generated by GNN Processing Pipeline Step 5 (Export)
MultifactorPyMDPAgent
GNN v1
Multifactor PyMDP Agent v1
This model represents a PyMDP agent with multiple observation modalities and hidden state factors. - Observation modalities: "state_observation" (3 outcomes), "reward" (3 outcomes), "decision_proprioceptive" (3 outcomes) - Hidden state factors: "reward_level" (2 states), "decision_state" (3 states) - Control: "decision_state" factor is controllable with 3 possible actions. The parameterization is derived from a PyMDP Python script example.
A_m0[3,2,3,type=float] # Likelihood for modality 0 ("state_observation") A_m1[3,2,3,type=float] # Likelihood for modality 1 ("reward") A_m2[3,2,3,type=float] # Likelihood for modality 2 ("decision_proprioceptive")
B_f0[2,2,1,type=float] # Transitions for factor 0 ("reward_level"), 1 implicit action (uncontrolled) B_f1[3,3,3,type=float] # Transitions for factor 1 ("decision_state"), 3 actions
C_m0[3,type=float] # Preferences for modality 0 C_m1[3,type=float] # Preferences for modality 1 C_m2[3,type=float] # Preferences for modality 2
D_f0[2,type=float] # Prior for factor 0 D_f1[3,type=float] # Prior for factor 1
s_f0[2,1,type=float] # Hidden state for factor 0 ("reward_level") s_f1[3,1,type=float] # Hidden state for factor 1 ("decision_state") s_prime_f0[2,1,type=float] # Next hidden state for factor 0 s_prime_f1[3,1,type=float] # Next hidden state for factor 1
o_m0[3,1,type=float] # Observation for modality 0 o_m1[3,1,type=float] # Observation for modality 1 o_m2[3,1,type=float] # Observation for modality 2
π_f1[3,type=float] # Policy (distribution over actions) for controllable factor 1 u_f1[1,type=int] # Action taken for controllable factor 1 G[1,type=float] # Expected Free Energy (overall, or can be per policy) t[1,type=int] # Time step
(D_f0,D_f1)-(s_f0,s_f1) (s_f0,s_f1)-(A_m0,A_m1,A_m2) (A_m0,A_m1,A_m2)-(o_m0,o_m1,o_m2) (s_f0,s_f1,u_f1)-(B_f0,B_f1) # u_f1 primarily affects B_f1; B_f0 is uncontrolled (B_f0,B_f1)-(s_prime_f0,s_prime_f1) (C_m0,C_m1,C_m2)>G G>π_f1 π_f1-u_f1 G=ExpectedFreeEnergy t=Time
A_m0={ ( (0.33333,0.33333,0.8), (0.33333,0.33333,0.2) ), # obs=0; (vals for s_f1 over s_f0=0), (vals for s_f1 over s_f0=1) ( (0.33333,0.33333,0.0), (0.33333,0.33333,0.0) ), # obs=1 ( (0.33333,0.33333,0.2), (0.33333,0.33333,0.8) ) # obs=2 }
A_m1={ ( (0.0,0.731,0.0), (0.0,0.269,0.0) ), # obs=0 ( (0.0,0.269,0.0), (0.0,0.731,0.0) ), # obs=1 ( (1.0,0.0,1.0), (1.0,0.0,1.0) ) # obs=2 }
A_m2={ ( (1.0,0.0,0.0), (1.0,0.0,0.0) ), # obs=0 ( (0.0,1.0,0.0), (0.0,1.0,0.0) ), # obs=1 ( (0.0,0.0,1.0), (0.0,0.0,1.0) ) # obs=2 }
B_f0={ ( (1.0),(0.0) ), # s_next=0; (vals for s_prev over action=0) ( (0.0),(1.0) ) # s_next=1 }
B_f1={ ( (1.0,1.0,1.0), (0.0,0.0,0.0), (0.0,0.0,0.0) ), # s_next=0; (vals for actions over s_prev=0), (vals for actions over s_prev=1), ... ( (0.0,0.0,0.0), (1.0,1.0,1.0), (0.0,0.0,0.0) ), # s_next=1 ( (0.0,0.0,0.0), (0.0,0.0,0.0), (1.0,1.0,1.0) ) # s_next=2 }
C_m0={(0.0,0.0,0.0)}
C_m1={(1.0,-2.0,0.0)}
C_m2={(0.0,0.0,0.0)}
D_f0={(0.5,0.5)}
D_f1={(0.33333,0.33333,0.33333)}
Dynamic DiscreteTime=t ModelTimeHorizon=Unbounded # Agent definition is generally unbounded, specific simulation runs have a horizon.
A_m0=LikelihoodMatrixModality0 A_m1=LikelihoodMatrixModality1 A_m2=LikelihoodMatrixModality2 B_f0=TransitionMatrixFactor0 B_f1=TransitionMatrixFactor1 C_m0=LogPreferenceVectorModality0 C_m1=LogPreferenceVectorModality1 C_m2=LogPreferenceVectorModality2 D_f0=PriorOverHiddenStatesFactor0 D_f1=PriorOverHiddenStatesFactor1 s_f0=HiddenStateFactor0 s_f1=HiddenStateFactor1 s_prime_f0=NextHiddenStateFactor0 s_prime_f1=NextHiddenStateFactor1 o_m0=ObservationModality0 o_m1=ObservationModality1 o_m2=ObservationModality2 π_f1=PolicyVectorFactor1 # Distribution over actions for factor 1 u_f1=ActionFactor1 # Chosen action for factor 1 G=ExpectedFreeEnergy
num_hidden_states_factors: [2, 3] # s_f0[2], s_f1[3] num_obs_modalities: [3, 3, 3] # o_m0[3], o_m1[3], o_m2[3] num_control_factors: [1, 3] # B_f0 actions_dim=1 (uncontrolled), B_f1 actions_dim=3 (controlled by pi_f1)
Multifactor PyMDP Agent v1 - GNN Representation
NA

## Parsed Sections
# GNN Example: Multifactor PyMDP Agent
# Format: Markdown representation of a Multifactor PyMDP model in Active Inference format
# Version: 1.0
# This file is machine-readable and attempts to represent a PyMDP agent with multiple observation modalities and hidden state factors.
Multifactor PyMDP Agent v1
MultifactorPyMDPAgent
GNN v1
This model represents a PyMDP agent with multiple observation modalities and hidden state factors.
- Observation modalities: "state_observation" (3 outcomes), "reward" (3 outcomes), "decision_proprioceptive" (3 outcomes)
- Hidden state factors: "reward_level" (2 states), "decision_state" (3 states)
- Control: "decision_state" factor is controllable with 3 possible actions.
The parameterization is derived from a PyMDP Python script example.
# A_matrices are defined per modality: A_m[observation_outcomes, state_factor0_states, state_factor1_states]
A_m0[3,2,3,type=float] # Likelihood for modality 0 ("state_observation")
A_m1[3,2,3,type=float] # Likelihood for modality 1 ("reward")
A_m2[3,2,3,type=float] # Likelihood for modality 2 ("decision_proprioceptive")
# B_matrices are defined per hidden state factor: B_f[states_next, states_previous, actions]
B_f0[2,2,1,type=float] # Transitions for factor 0 ("reward_level"), 1 implicit action (uncontrolled)
B_f1[3,3,3,type=float] # Transitions for factor 1 ("decision_state"), 3 actions
# C_vectors are defined per modality: C_m[observation_outcomes]
C_m0[3,type=float] # Preferences for modality 0
C_m1[3,type=float] # Preferences for modality 1
C_m2[3,type=float] # Preferences for modality 2
# D_vectors are defined per hidden state factor: D_f[states]
D_f0[2,type=float] # Prior for factor 0
D_f1[3,type=float] # Prior for factor 1
# Hidden States
s_f0[2,1,type=float] # Hidden state for factor 0 ("reward_level")
s_f1[3,1,type=float] # Hidden state for factor 1 ("decision_state")
s_prime_f0[2,1,type=float] # Next hidden state for factor 0
s_prime_f1[3,1,type=float] # Next hidden state for factor 1
# Observations
o_m0[3,1,type=float] # Observation for modality 0
o_m1[3,1,type=float] # Observation for modality 1
o_m2[3,1,type=float] # Observation for modality 2
# Policy and Control
π_f1[3,type=float] # Policy (distribution over actions) for controllable factor 1
u_f1[1,type=int] # Action taken for controllable factor 1
G[1,type=float] # Expected Free Energy (overall, or can be per policy)
t[1,type=int] # Time step
(D_f0,D_f1)-(s_f0,s_f1)
(s_f0,s_f1)-(A_m0,A_m1,A_m2)
(A_m0,A_m1,A_m2)-(o_m0,o_m1,o_m2)
(s_f0,s_f1,u_f1)-(B_f0,B_f1) # u_f1 primarily affects B_f1; B_f0 is uncontrolled
(B_f0,B_f1)-(s_prime_f0,s_prime_f1)
(C_m0,C_m1,C_m2)>G
G>π_f1
π_f1-u_f1
G=ExpectedFreeEnergy
t=Time
# A_m0: num_obs[0]=3, num_states[0]=2, num_states[1]=3. Format: A[obs_idx][state_f0_idx][state_f1_idx]
# A[0][:, :, 0] = np.ones((3,2))/3
# A[0][:, :, 1] = np.ones((3,2))/3
# A[0][:, :, 2] = [[0.8,0.2],[0.0,0.0],[0.2,0.8]] (obs x state_f0 for state_f1=2)
A_m0={
( (0.33333,0.33333,0.8), (0.33333,0.33333,0.2) ), # obs=0; (vals for s_f1 over s_f0=0), (vals for s_f1 over s_f0=1)
( (0.33333,0.33333,0.0), (0.33333,0.33333,0.0) ), # obs=1
( (0.33333,0.33333,0.2), (0.33333,0.33333,0.8) ) # obs=2
}
# A_m1: num_obs[1]=3, num_states[0]=2, num_states[1]=3
# A[1][2, :, 0] = [1.0,1.0]
# A[1][0:2, :, 1] = softmax([[1,0],[0,1]]) approx [[0.731,0.269],[0.269,0.731]]
# A[1][2, :, 2] = [1.0,1.0]
# Others are 0.
A_m1={
( (0.0,0.731,0.0), (0.0,0.269,0.0) ), # obs=0
( (0.0,0.269,0.0), (0.0,0.731,0.0) ), # obs=1
( (1.0,0.0,1.0), (1.0,0.0,1.0) ) # obs=2
}
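The softmax values quoted in the comment (0.731 and 0.269) follow from normalizing exp over each column of the 2×2 identity-like matrix; a quick numpy check, with a numerically stable softmax written out for clarity:

```python
import numpy as np

def softmax(x, axis=0):
    # Numerically stable softmax along the given axis
    z = np.exp(x - x.max(axis=axis, keepdims=True))
    return z / z.sum(axis=axis, keepdims=True)

S = softmax(np.array([[1.0, 0.0],
                      [0.0, 1.0]]), axis=0)
# Each column becomes [e/(e+1), 1/(e+1)] or its mirror, i.e. about [0.731, 0.269]
```

This is where the 0.731/0.269 entries of A_m1 come from.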
# A_m2: num_obs[2]=3, num_states[0]=2, num_states[1]=3
# A[2][0,:,0]=1.0; A[2][1,:,1]=1.0; A[2][2,:,2]=1.0
# Others are 0.
A_m2={
( (1.0,0.0,0.0), (1.0,0.0,0.0) ), # obs=0
( (0.0,1.0,0.0), (0.0,1.0,0.0) ), # obs=1
( (0.0,0.0,1.0), (0.0,0.0,1.0) ) # obs=2
}
# B_f0: factor 0 (2 states), uncontrolled (1 action). Format B[s_next, s_prev, action=0]
# B_f0 = eye(2)
B_f0={
( (1.0),(0.0) ), # s_next=0; (vals for s_prev over action=0)
( (0.0),(1.0) ) # s_next=1
}
# B_f1: factor 1 (3 states), 3 actions. Format B[s_next, s_prev, action_idx]
# B_f1[:,:,action_idx] = eye(3) for each action
B_f1={
( (1.0,1.0,1.0), (0.0,0.0,0.0), (0.0,0.0,0.0) ), # s_next=0; (vals for actions over s_prev=0), (vals for actions over s_prev=1), ...
( (0.0,0.0,0.0), (1.0,1.0,1.0), (0.0,0.0,0.0) ), # s_next=1
( (0.0,0.0,0.0), (0.0,0.0,0.0), (1.0,1.0,1.0) ) # s_next=2
}
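As the comments state, both transition tensors here are identity dynamics: B_f0 is eye(2) with a single implicit action, and each action slice of B_f1 is eye(3). A short numpy sketch that builds them in the B[s_next, s_prev, action] layout and checks that every column is a valid distribution over next states:

```python
import numpy as np

# B_f0: 2-state factor, 1 implicit action -> identity dynamics, shape (2, 2, 1)
B_f0 = np.eye(2)[:, :, None]

# B_f1: 3-state factor, 3 actions; each action's slice is the identity, shape (3, 3, 3)
B_f1 = np.stack([np.eye(3)] * 3, axis=-1)

# Column-stochastic check: summing over s_next must give 1 for every (s_prev, action)
assert np.allclose(B_f0.sum(axis=0), 1.0)
assert np.allclose(B_f1.sum(axis=0), 1.0)
```

Identity B matrices mean the states persist regardless of action; only the A matrices and preferences drive this agent's behavior.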
# C_m0: num_obs[0]=3. Defaults to zeros.
C_m0={(0.0,0.0,0.0)}
# C_m1: num_obs[1]=3. C[1][0]=1.0, C[1][1]=-2.0
C_m1={(1.0,-2.0,0.0)}
# C_m2: num_obs[2]=3. Defaults to zeros.
C_m2={(0.0,0.0,0.0)}
# D_f0: factor 0 (2 states). Uniform prior.
D_f0={(0.5,0.5)}
# D_f1: factor 1 (3 states). Uniform prior.
D_f1={(0.33333,0.33333,0.33333)}
# Standard PyMDP agent equations for state inference (infer_states),
# policy inference (infer_policies), and action sampling (sample_action).
# qs = infer_states(o)
# q_pi, efe = infer_policies()
# action = sample_action()
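The infer_states step listed above is, at its core, a Bayesian update combining the likelihoods A_m with the priors D_f. A minimal numpy sketch of that update for one modality (A_m2, the deterministic proprioceptive mapping), assuming an exact joint posterior over the 2 × 3 joint states rather than PyMDP's factorized scheme:

```python
import numpy as np

# A_m2[obs, s_f0, s_f1]: obs deterministically reports the decision_state factor
A_m2 = np.zeros((3, 2, 3))
for obs in range(3):
    A_m2[obs, :, obs] = 1.0

D_f0 = np.array([0.5, 0.5])                 # uniform prior over reward_level
D_f1 = np.full(3, 1.0 / 3.0)                # uniform prior over decision_state
prior = np.einsum('i,j->ij', D_f0, D_f1)    # factorized joint prior, shape (2, 3)

o_m2 = 1                                    # observed outcome of modality 2
likelihood = A_m2[o_m2]                     # P(o_m2=1 | s_f0, s_f1), shape (2, 3)

posterior = likelihood * prior              # Bayes: posterior ∝ likelihood × prior
posterior /= posterior.sum()
# Posterior concentrates on decision_state = 1, staying uniform over reward_level
```

With multiple modalities the likelihoods multiply before normalizing; PyMDP's infer_states additionally imposes a mean-field factorization across the two hidden state factors.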
Dynamic
DiscreteTime=t
ModelTimeHorizon=Unbounded # Agent definition is generally unbounded, specific simulation runs have a horizon.
A_m0=LikelihoodMatrixModality0
A_m1=LikelihoodMatrixModality1
A_m2=LikelihoodMatrixModality2
B_f0=TransitionMatrixFactor0
B_f1=TransitionMatrixFactor1
C_m0=LogPreferenceVectorModality0
C_m1=LogPreferenceVectorModality1
C_m2=LogPreferenceVectorModality2
D_f0=PriorOverHiddenStatesFactor0
D_f1=PriorOverHiddenStatesFactor1
s_f0=HiddenStateFactor0
s_f1=HiddenStateFactor1
s_prime_f0=NextHiddenStateFactor0
s_prime_f1=NextHiddenStateFactor1
o_m0=ObservationModality0
o_m1=ObservationModality1
o_m2=ObservationModality2
π_f1=PolicyVectorFactor1 # Distribution over actions for factor 1
u_f1=ActionFactor1 # Chosen action for factor 1
G=ExpectedFreeEnergy
num_hidden_states_factors: [2, 3] # s_f0[2], s_f1[3]
num_obs_modalities: [3, 3, 3] # o_m0[3], o_m1[3], o_m2[3]
num_control_factors: [1, 3] # B_f0 actions_dim=1 (uncontrolled), B_f1 actions_dim=3 (controlled by pi_f1)
Multifactor PyMDP Agent v1 - GNN Representation
NA
{
"_HeaderComments": "# GNN Example: Multifactor PyMDP Agent\n# Format: Markdown representation of a Multifactor PyMDP model in Active Inference format\n# Version: 1.0\n# This file is machine-readable and attempts to represent a PyMDP agent with multiple observation modalities and hidden state factors.",
"ModelName": "Multifactor PyMDP Agent v1",
"GNNSection": "MultifactorPyMDPAgent",
"GNNVersionAndFlags": "GNN v1",
"ModelAnnotation": "This model represents a PyMDP agent with multiple observation modalities and hidden state factors.\n- Observation modalities: \"state_observation\" (3 outcomes), \"reward\" (3 outcomes), \"decision_proprioceptive\" (3 outcomes)\n- Hidden state factors: \"reward_level\" (2 states), \"decision_state\" (3 states)\n- Control: \"decision_state\" factor is controllable with 3 possible actions.\nThe parameterization is derived from a PyMDP Python script example.",
"StateSpaceBlock": "# A_matrices are defined per modality: A_m[observation_outcomes, state_factor0_states, state_factor1_states]\nA_m0[3,2,3,type=float] # Likelihood for modality 0 (\"state_observation\")\nA_m1[3,2,3,type=float] # Likelihood for modality 1 (\"reward\")\nA_m2[3,2,3,type=float] # Likelihood for modality 2 (\"decision_proprioceptive\")\n\n# B_matrices are defined per hidden state factor: B_f[states_next, states_previous, actions]\nB_f0[2,2,1,type=float] # Transitions for factor 0 (\"reward_level\"), 1 implicit action (uncontrolled)\nB_f1[3,3,3,type=float] # Transitions for factor 1 (\"decision_state\"), 3 actions\n\n# C_vectors are defined per modality: C_m[observation_outcomes]\nC_m0[3,type=float] # Preferences for modality 0\nC_m1[3,type=float] # Preferences for modality 1\nC_m2[3,type=float] # Preferences for modality 2\n\n# D_vectors are defined per hidden state factor: D_f[states]\nD_f0[2,type=float] # Prior for factor 0\nD_f1[3,type=float] # Prior for factor 1\n\n# Hidden States\ns_f0[2,1,type=float] # Hidden state for factor 0 (\"reward_level\")\ns_f1[3,1,type=float] # Hidden state for factor 1 (\"decision_state\")\ns_prime_f0[2,1,type=float] # Next hidden state for factor 0\ns_prime_f1[3,1,type=float] # Next hidden state for factor 1\n\n# Observations\no_m0[3,1,type=float] # Observation for modality 0\no_m1[3,1,type=float] # Observation for modality 1\no_m2[3,1,type=float] # Observation for modality 2\n\n# Policy and Control\n\u03c0_f1[3,type=float] # Policy (distribution over actions) for controllable factor 1\nu_f1[1,type=int] # Action taken for controllable factor 1\nG[1,type=float] # Expected Free Energy (overall, or can be per policy)\nt[1,type=int] # Time step",
"Connections": "(D_f0,D_f1)-(s_f0,s_f1)\n(s_f0,s_f1)-(A_m0,A_m1,A_m2)\n(A_m0,A_m1,A_m2)-(o_m0,o_m1,o_m2)\n(s_f0,s_f1,u_f1)-(B_f0,B_f1) # u_f1 primarily affects B_f1; B_f0 is uncontrolled\n(B_f0,B_f1)-(s_prime_f0,s_prime_f1)\n(C_m0,C_m1,C_m2)>G\nG>\u03c0_f1\n\u03c0_f1-u_f1\nG=ExpectedFreeEnergy\nt=Time",
"InitialParameterization": "# A_m0: num_obs[0]=3, num_states[0]=2, num_states[1]=3. Format: A[obs_idx][state_f0_idx][state_f1_idx]\n# A[0][:, :, 0] = np.ones((3,2))/3\n# A[0][:, :, 1] = np.ones((3,2))/3\n# A[0][:, :, 2] = [[0.8,0.2],[0.0,0.0],[0.2,0.8]] (obs x state_f0 for state_f1=2)\nA_m0={\n ( (0.33333,0.33333,0.8), (0.33333,0.33333,0.2) ), # obs=0; (vals for s_f1 over s_f0=0), (vals for s_f1 over s_f0=1)\n ( (0.33333,0.33333,0.0), (0.33333,0.33333,0.0) ), # obs=1\n ( (0.33333,0.33333,0.2), (0.33333,0.33333,0.8) ) # obs=2\n}\n\n# A_m1: num_obs[1]=3, num_states[0]=2, num_states[1]=3\n# A[1][2, :, 0] = [1.0,1.0]\n# A[1][0:2, :, 1] = softmax([[1,0],[0,1]]) approx [[0.731,0.269],[0.269,0.731]]\n# A[1][2, :, 2] = [1.0,1.0]\n# Others are 0.\nA_m1={\n ( (0.0,0.731,0.0), (0.0,0.269,0.0) ), # obs=0\n ( (0.0,0.269,0.0), (0.0,0.731,0.0) ), # obs=1\n ( (1.0,0.0,1.0), (1.0,0.0,1.0) ) # obs=2\n}\n\n# A_m2: num_obs[2]=3, num_states[0]=2, num_states[1]=3\n# A[2][0,:,0]=1.0; A[2][1,:,1]=1.0; A[2][2,:,2]=1.0\n# Others are 0.\nA_m2={\n ( (1.0,0.0,0.0), (1.0,0.0,0.0) ), # obs=0\n ( (0.0,1.0,0.0), (0.0,1.0,0.0) ), # obs=1\n ( (0.0,0.0,1.0), (0.0,0.0,1.0) ) # obs=2\n}\n\n# B_f0: factor 0 (2 states), uncontrolled (1 action). Format B[s_next, s_prev, action=0]\n# B_f0 = eye(2)\nB_f0={\n ( (1.0),(0.0) ), # s_next=0; (vals for s_prev over action=0)\n ( (0.0),(1.0) ) # s_next=1\n}\n\n# B_f1: factor 1 (3 states), 3 actions. Format B[s_next, s_prev, action_idx]\n# B_f1[:,:,action_idx] = eye(3) for each action\nB_f1={\n ( (1.0,1.0,1.0), (0.0,0.0,0.0), (0.0,0.0,0.0) ), # s_next=0; (vals for actions over s_prev=0), (vals for actions over s_prev=1), ...\n ( (0.0,0.0,0.0), (1.0,1.0,1.0), (0.0,0.0,0.0) ), # s_next=1\n ( (0.0,0.0,0.0), (0.0,0.0,0.0), (1.0,1.0,1.0) ) # s_next=2\n}\n\n# C_m0: num_obs[0]=3. Defaults to zeros.\nC_m0={(0.0,0.0,0.0)}\n\n# C_m1: num_obs[1]=3. C[1][0]=1.0, C[1][1]=-2.0\nC_m1={(1.0,-2.0,0.0)}\n\n# C_m2: num_obs[2]=3. 
Defaults to zeros.\nC_m2={(0.0,0.0,0.0)}\n\n# D_f0: factor 0 (2 states). Uniform prior.\nD_f0={(0.5,0.5)}\n\n# D_f1: factor 1 (3 states). Uniform prior.\nD_f1={(0.33333,0.33333,0.33333)}",
"Equations": "# Standard PyMDP agent equations for state inference (infer_states),\n# policy inference (infer_policies), and action sampling (sample_action).\n# qs = infer_states(o)\n# q_pi, efe = infer_policies()\n# action = sample_action()",
"Time": "Dynamic\nDiscreteTime=t\nModelTimeHorizon=Unbounded # Agent definition is generally unbounded, specific simulation runs have a horizon.",
"ActInfOntologyAnnotation": "A_m0=LikelihoodMatrixModality0\nA_m1=LikelihoodMatrixModality1\nA_m2=LikelihoodMatrixModality2\nB_f0=TransitionMatrixFactor0\nB_f1=TransitionMatrixFactor1\nC_m0=LogPreferenceVectorModality0\nC_m1=LogPreferenceVectorModality1\nC_m2=LogPreferenceVectorModality2\nD_f0=PriorOverHiddenStatesFactor0\nD_f1=PriorOverHiddenStatesFactor1\ns_f0=HiddenStateFactor0\ns_f1=HiddenStateFactor1\ns_prime_f0=NextHiddenStateFactor0\ns_prime_f1=NextHiddenStateFactor1\no_m0=ObservationModality0\no_m1=ObservationModality1\no_m2=ObservationModality2\n\u03c0_f1=PolicyVectorFactor1 # Distribution over actions for factor 1\nu_f1=ActionFactor1 # Chosen action for factor 1\nG=ExpectedFreeEnergy",
"ModelParameters": "num_hidden_states_factors: [2, 3] # s_f0[2], s_f1[3]\nnum_obs_modalities: [3, 3, 3] # o_m0[3], o_m1[3], o_m2[3]\nnum_control_factors: [1, 3] # B_f0 actions_dim=1 (uncontrolled), B_f1 actions_dim=3 (controlled by pi_f1)",
"Footer": "Multifactor PyMDP Agent v1 - GNN Representation",
"Signature": "NA"
}
full_model_data.json

{
"ModelName": "Multifactor PyMDP Agent v1",
"ModelAnnotation": "This model represents a PyMDP agent with multiple observation modalities and hidden state factors.\n- Observation modalities: \"state_observation\" (3 outcomes), \"reward\" (3 outcomes), \"decision_proprioceptive\" (3 outcomes)\n- Hidden state factors: \"reward_level\" (2 states), \"decision_state\" (3 states)\n- Control: \"decision_state\" factor is controllable with 3 possible actions.\nThe parameterization is derived from a PyMDP Python script example.",
"GNNVersionAndFlags": "GNN v1",
"Time": "Dynamic\nDiscreteTime=t\nModelTimeHorizon=Unbounded # Agent definition is generally unbounded, specific simulation runs have a horizon.",
"ActInfOntologyAnnotation": "A_m0=LikelihoodMatrixModality0\nA_m1=LikelihoodMatrixModality1\nA_m2=LikelihoodMatrixModality2\nB_f0=TransitionMatrixFactor0\nB_f1=TransitionMatrixFactor1\nC_m0=LogPreferenceVectorModality0\nC_m1=LogPreferenceVectorModality1\nC_m2=LogPreferenceVectorModality2\nD_f0=PriorOverHiddenStatesFactor0\nD_f1=PriorOverHiddenStatesFactor1\ns_f0=HiddenStateFactor0\ns_f1=HiddenStateFactor1\ns_prime_f0=NextHiddenStateFactor0\ns_prime_f1=NextHiddenStateFactor1\no_m0=ObservationModality0\no_m1=ObservationModality1\no_m2=ObservationModality2\n\u03c0_f1=PolicyVectorFactor1 # Distribution over actions for factor 1\nu_f1=ActionFactor1 # Chosen action for factor 1\nG=ExpectedFreeEnergy"
}
model_metadata.json

RxInferMultiAgentTrajectoryPlanning
GNN v1
Multi-agent Trajectory Planning
## Parsed Sections
# GNN Example: RxInfer Multi-agent Trajectory Planning
# Format: Markdown representation of a Multi-agent Trajectory Planning model for RxInfer.jl
# Version: 1.0
# This file is machine-readable and represents the configuration for the RxInfer.jl multi-agent trajectory planning example.
Multi-agent Trajectory Planning
RxInferMultiAgentTrajectoryPlanning
GNN v1
This model represents a multi-agent trajectory planning scenario in RxInfer.jl.
It includes:
- State space model for agents moving in a 2D environment
- Obstacle avoidance constraints
- Goal-directed behavior
- Inter-agent collision avoidance
The model can be used to simulate trajectory planning in various environments with obstacles.
# Model parameters
dt[1,type=float] # Time step for the state space model
gamma[1,type=float] # Constraint parameter for the Halfspace node
nr_steps[1,type=int] # Number of time steps in the trajectory
nr_iterations[1,type=int] # Number of inference iterations
nr_agents[1,type=int] # Number of agents in the simulation
softmin_temperature[1,type=float] # Temperature parameter for the softmin function
intermediate_steps[1,type=int] # Intermediate results saving interval
save_intermediates[1,type=bool] # Whether to save intermediate results
# State space matrices
A[4,4,type=float] # State transition matrix
B[4,2,type=float] # Control input matrix
C[2,4,type=float] # Observation matrix
# Prior distributions
initial_state_variance[1,type=float] # Prior on initial state
control_variance[1,type=float] # Prior on control inputs
goal_constraint_variance[1,type=float] # Goal constraints variance
gamma_shape[1,type=float] # Parameters for GammaShapeRate prior
gamma_scale_factor[1,type=float] # Parameters for GammaShapeRate prior
# Visualization parameters
x_limits[2,type=float] # Plot boundaries (x-axis)
y_limits[2,type=float] # Plot boundaries (y-axis)
fps[1,type=int] # Animation frames per second
heatmap_resolution[1,type=int] # Heatmap resolution
plot_width[1,type=int] # Plot width
plot_height[1,type=int] # Plot height
agent_alpha[1,type=float] # Visualization alpha for agents
target_alpha[1,type=float] # Visualization alpha for targets
color_palette[1,type=string] # Color palette for visualization
# Environment definitions
door_obstacle_center_1[2,type=float] # Door environment, obstacle 1 center
door_obstacle_size_1[2,type=float] # Door environment, obstacle 1 size
door_obstacle_center_2[2,type=float] # Door environment, obstacle 2 center
door_obstacle_size_2[2,type=float] # Door environment, obstacle 2 size
wall_obstacle_center[2,type=float] # Wall environment, obstacle center
wall_obstacle_size[2,type=float] # Wall environment, obstacle size
combined_obstacle_center_1[2,type=float] # Combined environment, obstacle 1 center
combined_obstacle_size_1[2,type=float] # Combined environment, obstacle 1 size
combined_obstacle_center_2[2,type=float] # Combined environment, obstacle 2 center
combined_obstacle_size_2[2,type=float] # Combined environment, obstacle 2 size
combined_obstacle_center_3[2,type=float] # Combined environment, obstacle 3 center
combined_obstacle_size_3[2,type=float] # Combined environment, obstacle 3 size
# Agent configurations
agent1_id[1,type=int] # Agent 1 ID
agent1_radius[1,type=float] # Agent 1 radius
agent1_initial_position[2,type=float] # Agent 1 initial position
agent1_target_position[2,type=float] # Agent 1 target position
agent2_id[1,type=int] # Agent 2 ID
agent2_radius[1,type=float] # Agent 2 radius
agent2_initial_position[2,type=float] # Agent 2 initial position
agent2_target_position[2,type=float] # Agent 2 target position
agent3_id[1,type=int] # Agent 3 ID
agent3_radius[1,type=float] # Agent 3 radius
agent3_initial_position[2,type=float] # Agent 3 initial position
agent3_target_position[2,type=float] # Agent 3 target position
agent4_id[1,type=int] # Agent 4 ID
agent4_radius[1,type=float] # Agent 4 radius
agent4_initial_position[2,type=float] # Agent 4 initial position
agent4_target_position[2,type=float] # Agent 4 target position
# Experiment configurations
experiment_seeds[2,type=int] # Random seeds for reproducibility
results_dir[1,type=string] # Base directory for results
animation_template[1,type=string] # Filename template for animations
control_vis_filename[1,type=string] # Filename for control visualization
obstacle_distance_filename[1,type=string] # Filename for obstacle distance plot
path_uncertainty_filename[1,type=string] # Filename for path uncertainty plot
convergence_filename[1,type=string] # Filename for convergence plot
# Model parameters
dt > A
(A, B, C) > state_space_model
# Agent trajectories
(state_space_model, initial_state_variance, control_variance) > agent_trajectories
# Goal constraints
(agent_trajectories, goal_constraint_variance) > goal_directed_behavior
# Obstacle avoidance
(agent_trajectories, gamma, gamma_shape, gamma_scale_factor) > obstacle_avoidance
# Collision avoidance
(agent_trajectories, nr_agents) > collision_avoidance
# Complete planning system
(goal_directed_behavior, obstacle_avoidance, collision_avoidance) > planning_system
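The connection topology above can be read as a dependency graph. A minimal sketch (not part of the GNN spec) that encodes it and derives a build order an interpreter could follow, using the standard-library `graphlib`:

```python
from graphlib import TopologicalSorter

# Each node maps to its predecessors, mirroring the "(deps) > node" lines above.
connections = {
    "A": ["dt"],
    "state_space_model": ["A", "B", "C"],
    "agent_trajectories": ["state_space_model", "initial_state_variance", "control_variance"],
    "goal_directed_behavior": ["agent_trajectories", "goal_constraint_variance"],
    "obstacle_avoidance": ["agent_trajectories", "gamma", "gamma_shape", "gamma_scale_factor"],
    "collision_avoidance": ["agent_trajectories", "nr_agents"],
    "planning_system": ["goal_directed_behavior", "obstacle_avoidance", "collision_avoidance"],
}

# Predecessors come first; planning_system is the unique sink and comes last.
order = list(TopologicalSorter(connections).static_order())
```

Every variable feeds into `planning_system`, so it is always the final node in any valid ordering.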
# Model parameters
dt=1.0
gamma=1.0
nr_steps=40
nr_iterations=350
nr_agents=4
softmin_temperature=10.0
intermediate_steps=10
save_intermediates=false
# State space matrices
# A = [1 dt 0 0; 0 1 0 0; 0 0 1 dt; 0 0 0 1]
A={(1.0, 1.0, 0.0, 0.0), (0.0, 1.0, 0.0, 0.0), (0.0, 0.0, 1.0, 1.0), (0.0, 0.0, 0.0, 1.0)}
# B = [0 0; dt 0; 0 0; 0 dt]
B={(0.0, 0.0), (1.0, 0.0), (0.0, 0.0), (0.0, 1.0)}
# C = [1 0 0 0; 0 0 1 0]
C={(1.0, 0.0, 0.0, 0.0), (0.0, 0.0, 1.0, 0.0)}
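The commented matrix forms above are functions of `dt`. A hypothetical helper (not in the GNN file) that rebuilds them, reproducing the literal values listed here when `dt=1.0`:

```python
import numpy as np

def make_state_space(dt: float):
    # State x = [pos_x, vel_x, pos_y, vel_y]; constant-velocity kinematics.
    A = np.array([[1, dt, 0, 0],
                  [0, 1,  0, 0],
                  [0, 0,  1, dt],
                  [0, 0,  0, 1]], dtype=float)
    # Controls act on the two velocity components.
    B = np.array([[0,  0],
                  [dt, 0],
                  [0,  0],
                  [0,  dt]], dtype=float)
    # Only the two position components are observed.
    C = np.array([[1, 0, 0, 0],
                  [0, 0, 1, 0]], dtype=float)
    return A, B, C

A, B, C = make_state_space(dt=1.0)
```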
# Prior distributions
initial_state_variance=100.0
control_variance=0.1
goal_constraint_variance=0.00001
gamma_shape=1.5
gamma_scale_factor=0.5
# Visualization parameters
x_limits={(-20, 20)}
y_limits={(-20, 20)}
fps=15
heatmap_resolution=100
plot_width=800
plot_height=400
agent_alpha=1.0
target_alpha=0.2
color_palette="tab10"
# Environment definitions
door_obstacle_center_1={(-40.0, 0.0)}
door_obstacle_size_1={(70.0, 5.0)}
door_obstacle_center_2={(40.0, 0.0)}
door_obstacle_size_2={(70.0, 5.0)}
wall_obstacle_center={(0.0, 0.0)}
wall_obstacle_size={(10.0, 5.0)}
combined_obstacle_center_1={(-50.0, 0.0)}
combined_obstacle_size_1={(70.0, 2.0)}
combined_obstacle_center_2={(50.0, 0.0)}
combined_obstacle_size_2={(70.0, 2.0)}
combined_obstacle_center_3={(5.0, -1.0)}
combined_obstacle_size_3={(3.0, 10.0)}
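Each obstacle above is given as a center and a size, which suggests an axis-aligned rectangle. A sketch (the rectangle interpretation is an assumption, not stated in the GNN file) of the distance `d(x, obstacle)` used later by the obstacle-avoidance constraint:

```python
import numpy as np

def rect_distance(point, center, size):
    """Euclidean distance from a 2D point to an axis-aligned rectangle.
    Returns 0.0 for points inside the rectangle."""
    half = np.asarray(size, dtype=float) / 2.0
    d = np.abs(np.asarray(point, dtype=float) - np.asarray(center, dtype=float)) - half
    return float(np.linalg.norm(np.maximum(d, 0.0)))

# Agent 1's start (-4, 10) relative to the wall obstacle at the origin:
rect_distance((-4.0, 10.0), (0.0, 0.0), (10.0, 5.0))  # → 7.5 (outside the wall)
```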
# Agent configurations
agent1_id=1
agent1_radius=2.5
agent1_initial_position={(-4.0, 10.0)}
agent1_target_position={(-10.0, -10.0)}
agent2_id=2
agent2_radius=1.5
agent2_initial_position={(-10.0, 5.0)}
agent2_target_position={(10.0, -15.0)}
agent3_id=3
agent3_radius=1.0
agent3_initial_position={(-15.0, -10.0)}
agent3_target_position={(10.0, 10.0)}
agent4_id=4
agent4_radius=2.5
agent4_initial_position={(0.0, -10.0)}
agent4_target_position={(-10.0, 15.0)}
# Experiment configurations
experiment_seeds={(42, 123)}
results_dir="results"
animation_template="{environment}_{seed}.gif"
control_vis_filename="control_signals.gif"
obstacle_distance_filename="obstacle_distance.png"
path_uncertainty_filename="path_uncertainty.png"
convergence_filename="convergence.png"
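A minimal sketch of how the filename template above could be expanded per experiment. The environment names come from the obstacle prefixes in this file (`door`, `wall`, `combined`); the loop itself is an assumption about how a runner would use the template:

```python
results_dir = "results"
animation_template = "{environment}_{seed}.gif"
experiment_seeds = (42, 123)

# One animation per (environment, seed) pair, e.g. results/door_42.gif
paths = [f"{results_dir}/{animation_template.format(environment=env, seed=seed)}"
         for env in ("door", "wall", "combined")
         for seed in experiment_seeds]
```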
# State space model:
# x_{t+1} = A * x_t + B * u_t + w_t, w_t ~ N(0, control_variance)
# y_t = C * x_t + v_t, v_t ~ N(0, observation_variance)
#
# Obstacle avoidance constraint:
# p(x_t | obstacle) ~ N(d(x_t, obstacle), gamma)
# where d(x_t, obstacle) is the distance from position x_t to the nearest obstacle
#
# Goal constraint:
# p(x_T | goal) ~ N(goal, goal_constraint_variance)
# where x_T is the final position
#
# Collision avoidance constraint:
# p(x_i, x_j) ~ N(||x_i - x_j|| - (r_i + r_j), gamma)
# where x_i, x_j are positions of agents i and j, r_i, r_j are their radii
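The linear-Gaussian dynamics above can be rolled forward directly. A sketch (not from the GNN file; the zero open-loop control is a placeholder for the inferred policy) of `x_{t+1} = A x_t + B u_t + w_t` with observations `y_t = C x_t`:

```python
import numpy as np

rng = np.random.default_rng(42)
dt, nr_steps = 1.0, 40
control_variance = 0.1

A = np.array([[1, dt, 0, 0], [0, 1, 0, 0], [0, 0, 1, dt], [0, 0, 0, 1]], dtype=float)
B = np.array([[0, 0], [dt, 0], [0, 0], [0, dt]], dtype=float)
C = np.array([[1, 0, 0, 0], [0, 0, 1, 0]], dtype=float)

x = np.array([-4.0, 0.0, 10.0, 0.0])    # agent 1 start, zero velocity
trajectory = [C @ x]
for _ in range(nr_steps):
    u = np.zeros(2)                      # open-loop placeholder control
    w = rng.normal(0.0, np.sqrt(control_variance), size=4)  # process noise
    x = A @ x + B @ u + w
    trajectory.append(C @ x)             # observed (x, y) position
```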
Dynamic
DiscreteTime
ModelTimeHorizon=nr_steps
dt=TimeStep
gamma=ConstraintParameter
nr_steps=TrajectoryLength
nr_iterations=InferenceIterations
nr_agents=NumberOfAgents
softmin_temperature=SoftminTemperature
A=StateTransitionMatrix
B=ControlInputMatrix
C=ObservationMatrix
initial_state_variance=InitialStateVariance
control_variance=ControlVariance
goal_constraint_variance=GoalConstraintVariance
nr_agents=4
nr_steps=40
nr_iterations=350
Multi-agent Trajectory Planning - GNN Representation for RxInfer.jl
Creator: AI Assistant for GNN
Date: 2024-07-27
Status: Example for RxInfer.jl multi-agent trajectory planning
{
"_HeaderComments": "# GNN Example: RxInfer Multi-agent Trajectory Planning\n# Format: Markdown representation of a Multi-agent Trajectory Planning model for RxInfer.jl\n# Version: 1.0\n# This file is machine-readable and represents the configuration for the RxInfer.jl multi-agent trajectory planning example.",
"ModelName": "Multi-agent Trajectory Planning",
"GNNSection": "RxInferMultiAgentTrajectoryPlanning",
"GNNVersionAndFlags": "GNN v1",
"ModelAnnotation": "This model represents a multi-agent trajectory planning scenario in RxInfer.jl.\nIt includes:\n- State space model for agents moving in a 2D environment\n- Obstacle avoidance constraints\n- Goal-directed behavior\n- Inter-agent collision avoidance\nThe model can be used to simulate trajectory planning in various environments with obstacles.",
"StateSpaceBlock": "# Model parameters\ndt[1,type=float] # Time step for the state space model\ngamma[1,type=float] # Constraint parameter for the Halfspace node\nnr_steps[1,type=int] # Number of time steps in the trajectory\nnr_iterations[1,type=int] # Number of inference iterations\nnr_agents[1,type=int] # Number of agents in the simulation\nsoftmin_temperature[1,type=float] # Temperature parameter for the softmin function\nintermediate_steps[1,type=int] # Intermediate results saving interval\nsave_intermediates[1,type=bool] # Whether to save intermediate results\n\n# State space matrices\nA[4,4,type=float] # State transition matrix\nB[4,2,type=float] # Control input matrix\nC[2,4,type=float] # Observation matrix\n\n# Prior distributions\ninitial_state_variance[1,type=float] # Prior on initial state\ncontrol_variance[1,type=float] # Prior on control inputs\ngoal_constraint_variance[1,type=float] # Goal constraints variance\ngamma_shape[1,type=float] # Parameters for GammaShapeRate prior\ngamma_scale_factor[1,type=float] # Parameters for GammaShapeRate prior\n\n# Visualization parameters\nx_limits[2,type=float] # Plot boundaries (x-axis)\ny_limits[2,type=float] # Plot boundaries (y-axis)\nfps[1,type=int] # Animation frames per second\nheatmap_resolution[1,type=int] # Heatmap resolution\nplot_width[1,type=int] # Plot width\nplot_height[1,type=int] # Plot height\nagent_alpha[1,type=float] # Visualization alpha for agents\ntarget_alpha[1,type=float] # Visualization alpha for targets\ncolor_palette[1,type=string] # Color palette for visualization\n\n# Environment definitions\ndoor_obstacle_center_1[2,type=float] # Door environment, obstacle 1 center\ndoor_obstacle_size_1[2,type=float] # Door environment, obstacle 1 size\ndoor_obstacle_center_2[2,type=float] # Door environment, obstacle 2 center\ndoor_obstacle_size_2[2,type=float] # Door environment, obstacle 2 size\n\nwall_obstacle_center[2,type=float] # Wall environment, obstacle 
center\nwall_obstacle_size[2,type=float] # Wall environment, obstacle size\n\ncombined_obstacle_center_1[2,type=float] # Combined environment, obstacle 1 center\ncombined_obstacle_size_1[2,type=float] # Combined environment, obstacle 1 size\ncombined_obstacle_center_2[2,type=float] # Combined environment, obstacle 2 center\ncombined_obstacle_size_2[2,type=float] # Combined environment, obstacle 2 size\ncombined_obstacle_center_3[2,type=float] # Combined environment, obstacle 3 center\ncombined_obstacle_size_3[2,type=float] # Combined environment, obstacle 3 size\n\n# Agent configurations\nagent1_id[1,type=int] # Agent 1 ID\nagent1_radius[1,type=float] # Agent 1 radius\nagent1_initial_position[2,type=float] # Agent 1 initial position\nagent1_target_position[2,type=float] # Agent 1 target position\n\nagent2_id[1,type=int] # Agent 2 ID\nagent2_radius[1,type=float] # Agent 2 radius\nagent2_initial_position[2,type=float] # Agent 2 initial position\nagent2_target_position[2,type=float] # Agent 2 target position\n\nagent3_id[1,type=int] # Agent 3 ID\nagent3_radius[1,type=float] # Agent 3 radius\nagent3_initial_position[2,type=float] # Agent 3 initial position\nagent3_target_position[2,type=float] # Agent 3 target position\n\nagent4_id[1,type=int] # Agent 4 ID\nagent4_radius[1,type=float] # Agent 4 radius\nagent4_initial_position[2,type=float] # Agent 4 initial position\nagent4_target_position[2,type=float] # Agent 4 target position\n\n# Experiment configurations\nexperiment_seeds[2,type=int] # Random seeds for reproducibility\nresults_dir[1,type=string] # Base directory for results\nanimation_template[1,type=string] # Filename template for animations\ncontrol_vis_filename[1,type=string] # Filename for control visualization\nobstacle_distance_filename[1,type=string] # Filename for obstacle distance plot\npath_uncertainty_filename[1,type=string] # Filename for path uncertainty plot\nconvergence_filename[1,type=string] # Filename for convergence plot",
"Connections": "# Model parameters\ndt > A\n(A, B, C) > state_space_model\n\n# Agent trajectories\n(state_space_model, initial_state_variance, control_variance) > agent_trajectories\n\n# Goal constraints\n(agent_trajectories, goal_constraint_variance) > goal_directed_behavior\n\n# Obstacle avoidance\n(agent_trajectories, gamma, gamma_shape, gamma_scale_factor) > obstacle_avoidance\n\n# Collision avoidance\n(agent_trajectories, nr_agents) > collision_avoidance\n\n# Complete planning system\n(goal_directed_behavior, obstacle_avoidance, collision_avoidance) > planning_system",
"InitialParameterization": "# Model parameters\ndt=1.0\ngamma=1.0\nnr_steps=40\nnr_iterations=350\nnr_agents=4\nsoftmin_temperature=10.0\nintermediate_steps=10\nsave_intermediates=false\n\n# State space matrices\n# A = [1 dt 0 0; 0 1 0 0; 0 0 1 dt; 0 0 0 1]\nA={(1.0, 1.0, 0.0, 0.0), (0.0, 1.0, 0.0, 0.0), (0.0, 0.0, 1.0, 1.0), (0.0, 0.0, 0.0, 1.0)}\n\n# B = [0 0; dt 0; 0 0; 0 dt]\nB={(0.0, 0.0), (1.0, 0.0), (0.0, 0.0), (0.0, 1.0)}\n\n# C = [1 0 0 0; 0 0 1 0]\nC={(1.0, 0.0, 0.0, 0.0), (0.0, 0.0, 1.0, 0.0)}\n\n# Prior distributions\ninitial_state_variance=100.0\ncontrol_variance=0.1\ngoal_constraint_variance=0.00001\ngamma_shape=1.5\ngamma_scale_factor=0.5\n\n# Visualization parameters\nx_limits={(-20, 20)}\ny_limits={(-20, 20)}\nfps=15\nheatmap_resolution=100\nplot_width=800\nplot_height=400\nagent_alpha=1.0\ntarget_alpha=0.2\ncolor_palette=\"tab10\"\n\n# Environment definitions\ndoor_obstacle_center_1={(-40.0, 0.0)}\ndoor_obstacle_size_1={(70.0, 5.0)}\ndoor_obstacle_center_2={(40.0, 0.0)}\ndoor_obstacle_size_2={(70.0, 5.0)}\n\nwall_obstacle_center={(0.0, 0.0)}\nwall_obstacle_size={(10.0, 5.0)}\n\ncombined_obstacle_center_1={(-50.0, 0.0)}\ncombined_obstacle_size_1={(70.0, 2.0)}\ncombined_obstacle_center_2={(50.0, 0.0)}\ncombined_obstacle_size_2={(70.0, 2.0)}\ncombined_obstacle_center_3={(5.0, -1.0)}\ncombined_obstacle_size_3={(3.0, 10.0)}\n\n# Agent configurations\nagent1_id=1\nagent1_radius=2.5\nagent1_initial_position={(-4.0, 10.0)}\nagent1_target_position={(-10.0, -10.0)}\n\nagent2_id=2\nagent2_radius=1.5\nagent2_initial_position={(-10.0, 5.0)}\nagent2_target_position={(10.0, -15.0)}\n\nagent3_id=3\nagent3_radius=1.0\nagent3_initial_position={(-15.0, -10.0)}\nagent3_target_position={(10.0, 10.0)}\n\nagent4_id=4\nagent4_radius=2.5\nagent4_initial_position={(0.0, -10.0)}\nagent4_target_position={(-10.0, 15.0)}\n\n# Experiment configurations\nexperiment_seeds={(42, 
123)}\nresults_dir=\"results\"\nanimation_template=\"{environment}_{seed}.gif\"\ncontrol_vis_filename=\"control_signals.gif\"\nobstacle_distance_filename=\"obstacle_distance.png\"\npath_uncertainty_filename=\"path_uncertainty.png\"\nconvergence_filename=\"convergence.png\"",
"Equations": "# State space model:\n# x_{t+1} = A * x_t + B * u_t + w_t, w_t ~ N(0, control_variance)\n# y_t = C * x_t + v_t, v_t ~ N(0, observation_variance)\n#\n# Obstacle avoidance constraint:\n# p(x_t | obstacle) ~ N(d(x_t, obstacle), gamma)\n# where d(x_t, obstacle) is the distance from position x_t to the nearest obstacle\n#\n# Goal constraint:\n# p(x_T | goal) ~ N(goal, goal_constraint_variance)\n# where x_T is the final position\n#\n# Collision avoidance constraint:\n# p(x_i, x_j) ~ N(||x_i - x_j|| - (r_i + r_j), gamma)\n# where x_i, x_j are positions of agents i and j, r_i, r_j are their radii",
"Time": "Dynamic\nDiscreteTime\nModelTimeHorizon=nr_steps",
"ActInfOntologyAnnotation": "dt=TimeStep\ngamma=ConstraintParameter\nnr_steps=TrajectoryLength\nnr_iterations=InferenceIterations\nnr_agents=NumberOfAgents\nsoftmin_temperature=SoftminTemperature\nA=StateTransitionMatrix\nB=ControlInputMatrix\nC=ObservationMatrix\ninitial_state_variance=InitialStateVariance\ncontrol_variance=ControlVariance\ngoal_constraint_variance=GoalConstraintVariance",
"ModelParameters": "nr_agents=4\nnr_steps=40\nnr_iterations=350",
"Footer": "Multi-agent Trajectory Planning - GNN Representation for RxInfer.jl",
"Signature": "Creator: AI Assistant for GNN\nDate: 2024-07-27\nStatus: Example for RxInfer.jl multi-agent trajectory planning"
}

full_model_data.json

{
"ModelName": "Multi-agent Trajectory Planning",
"ModelAnnotation": "This model represents a multi-agent trajectory planning scenario in RxInfer.jl.\nIt includes:\n- State space model for agents moving in a 2D environment\n- Obstacle avoidance constraints\n- Goal-directed behavior\n- Inter-agent collision avoidance\nThe model can be used to simulate trajectory planning in various environments with obstacles.",
"GNNVersionAndFlags": "GNN v1",
"Time": "Dynamic\nDiscreteTime\nModelTimeHorizon=nr_steps",
"ActInfOntologyAnnotation": "dt=TimeStep\ngamma=ConstraintParameter\nnr_steps=TrajectoryLength\nnr_iterations=InferenceIterations\nnr_agents=NumberOfAgents\nsoftmin_temperature=SoftminTemperature\nA=StateTransitionMatrix\nB=ControlInputMatrix\nC=ObservationMatrix\ninitial_state_variance=InitialStateVariance\ncontrol_variance=ControlVariance\ngoal_constraint_variance=GoalConstraintVariance"
}

model_metadata.json

🗓️ Report Generated: 2025-06-07 08:31:04
MCP Core Directory: /home/trim/Documents/GitHub/GeneralizedNotationNotation/src/mcp
Project Source Root (for modules): /home/trim/Documents/GitHub/GeneralizedNotationNotation/src
Output Directory for this report: /home/trim/Documents/GitHub/GeneralizedNotationNotation/output/mcp_processing_step
This section lists all tools currently registered with the MCP system, along with their defining module, arguments, and description.
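The flat parameter schemas in this listing mark each argument with a type and an optional flag. A minimal sketch (not part of the MCP system, and covering only the flat schema form, not the JSON-Schema object form some tools use) of validating a call's arguments against such a schema:

```python
TYPE_MAP = {"string": str, "boolean": bool, "integer": int, "array": list, "object": dict}

def validate_args(schema: dict, args: dict) -> list[str]:
    """Return a list of validation errors; empty means the call is well-formed."""
    errors = []
    for name, spec in schema.items():
        if name not in args:
            if not spec.get("optional", False):
                errors.append(f"missing required argument: {name}")
            continue
        expected = TYPE_MAP.get(spec.get("type"))
        if expected and not isinstance(args[name], expected):
            errors.append(f"{name}: expected {spec['type']}")
    return errors

# Shape taken from the find_project_gnn_files schema below:
schema = {"search_directory": {"type": "string"},
          "recursive": {"type": "boolean", "optional": True}}
validate_args(schema, {"search_directory": "src/gnn/examples"})  # → []
```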
ensure_directory_exists [src.setup.mcp] (directory_path)
{
"directory_path": {
"type": "string",
"description": "Path of the directory to create if it doesn't exist."
}
}

estimate_resources_for_gnn_directory [src.gnn_type_checker.mcp] (dir_path, recursive)
{
"dir_path": {
"type": "string",
"description": "Path to the directory for GNN resource estimation."
},
"recursive": {
"type": "boolean",
"description": "Search directory recursively. Defaults to False.",
"optional": true
}
}

estimate_resources_for_gnn_file [src.gnn_type_checker.mcp] (file_path)
{
"file_path": {
"type": "string",
"description": "Path to the GNN file for resource estimation."
}
}

export_gnn_to_gexf [src.export.mcp] (gnn_file_path, output_file_path)
{
"gnn_file_path": {
"type": "string",
"description": "Path to the input GNN Markdown file (.gnn.md)."
},
"output_file_path": {
"type": "string",
"description": "Path where the exported file will be saved."
}
}

export_gnn_to_graphml [src.export.mcp] (gnn_file_path, output_file_path)
{
"gnn_file_path": {
"type": "string",
"description": "Path to the input GNN Markdown file (.gnn.md)."
},
"output_file_path": {
"type": "string",
"description": "Path where the exported file will be saved."
}
}

export_gnn_to_json [src.export.mcp] (gnn_file_path, output_file_path)
{
"gnn_file_path": {
"type": "string",
"description": "Path to the input GNN Markdown file (.gnn.md)."
},
"output_file_path": {
"type": "string",
"description": "Path where the exported file will be saved."
}
}

export_gnn_to_json_adjacency_list [src.export.mcp] (gnn_file_path, output_file_path)
{
"gnn_file_path": {
"type": "string",
"description": "Path to the input GNN Markdown file (.gnn.md)."
},
"output_file_path": {
"type": "string",
"description": "Path where the exported file will be saved."
}
}

export_gnn_to_plaintext_dsl [src.export.mcp] (gnn_file_path, output_file_path)
{
"gnn_file_path": {
"type": "string",
"description": "Path to the input GNN Markdown file (.gnn.md)."
},
"output_file_path": {
"type": "string",
"description": "Path where the exported file will be saved."
}
}

export_gnn_to_plaintext_summary [src.export.mcp] (gnn_file_path, output_file_path)
{
"gnn_file_path": {
"type": "string",
"description": "Path to the input GNN Markdown file (.gnn.md)."
},
"output_file_path": {
"type": "string",
"description": "Path where the exported file will be saved."
}
}

export_gnn_to_python_pickle [src.export.mcp] (gnn_file_path, output_file_path)
{
"gnn_file_path": {
"type": "string",
"description": "Path to the input GNN Markdown file (.gnn.md)."
},
"output_file_path": {
"type": "string",
"description": "Path where the exported file will be saved."
}
}

export_gnn_to_xml [src.export.mcp] (gnn_file_path, output_file_path)
{
"gnn_file_path": {
"type": "string",
"description": "Path to the input GNN Markdown file (.gnn.md)."
},
"output_file_path": {
"type": "string",
"description": "Path where the exported file will be saved."
}
}

find_project_gnn_files [src.setup.mcp] (search_directory, recursive)
{
"search_directory": {
"type": "string",
"description": "The directory to search for GNN (.md) files."
},
"recursive": {
"type": "boolean",
"description": "Set to true to search recursively. Defaults to false.",
"optional": true
}
}

generate_pipeline_summary_site [src.site.mcp] (output_dir, site_output_filename, verbose)
{
"type": "object",
"properties": {
"output_dir": {
"type": "string",
"description": "The main pipeline output directory to scan for results."
},
"site_output_filename": {
"type": "string",
"description": "The filename for the output HTML report (e.g., 'summary.html')."
},
"verbose": {
"type": "boolean",
"description": "Enable verbose logging for the generator."
}
},
"required": [
"output_dir",
"site_output_filename"
]
}

get_gnn_documentation [src.gnn.mcp] (doc_name)
{
"doc_name": {
"type": "string",
"description": "Name of the GNN document (e.g., 'file_structure', 'punctuation')",
"enum": [
"file_structure",
"punctuation"
]
}
}

get_standard_output_paths [src.setup.mcp] (base_output_directory)
{
"base_output_directory": {
"type": "string",
"description": "The base directory where output subdirectories will be managed."
}
}

list_render_targets [src.render.mcp] ()
{
"properties": {},
"title": "ListRenderTargetsInput",
"type": "object"
}

llm.explain_gnn_file [src.llm.mcp] (file_path_str, aspect_to_explain)
{
"type": "object",
"properties": {
"file_path_str": {
"type": "string",
"description": "The absolute or relative path to the GNN file."
},
"aspect_to_explain": {
"type": "string",
"description": "(Optional) A specific part or concept within the GNN to focus the explanation on."
}
},
"required": [
"file_path_str"
]
}

llm.generate_professional_summary [src.llm.mcp] (file_path_str, experiment_details, target_audience)
{
"type": "object",
"properties": {
"file_path_str": {
"type": "string",
"description": "The absolute or relative path to the GNN file."
},
"experiment_details": {
"type": "string",
"description": "(Optional) Text describing the experiments conducted with the model, including setup, results, or observations."
},
"target_audience": {
"type": "string",
"description": "(Optional) The intended audience for the summary (e.g., 'fellow researchers', 'project managers'). Default: 'fellow researchers'."
}
},
"required": [
"file_path_str"
]
}

llm.summarize_gnn_file [src.llm.mcp] (file_path_str, user_prompt_suffix)
{
"type": "object",
"properties": {
"file_path_str": {
"type": "string",
"description": "The absolute or relative path to the GNN file (.md, .gnn.md, .json)."
},
"user_prompt_suffix": {
"type": "string",
"description": "(Optional) Additional instructions or focus points for the summary."
}
},
"required": [
"file_path_str"
]
}

parse_gnn_file [src.visualization.mcp] (file_path)
{
"file_path": {
"type": "string",
"description": "Path to the GNN file to parse"
}
}

render_gnn_specification [src.render.mcp] (input_data)
{
"properties": {
"gnn_specification": {
"anyOf": [
{
"additionalProperties": true,
"type": "object"
},
{
"type": "string"
}
],
"description": "The GNN specification itself as a dictionary, or a string URI/path to a GNN spec file (e.g., JSON).",
"title": "Gnn Specification"
},
"target_format": {
"description": "The target format to render the GNN specification to.",
"enum": [
"pymdp",
"rxinfer_toml"
],
"title": "Target Format",
"type": "string"
},
"output_filename_base": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Optional desired base name for the output file (e.g., 'my_model'). Extension is added automatically. If None, derived from GNN spec name or input file name.",
"title": "Output Filename Base"
},
"render_options": {
"anyOf": [
{
"additionalProperties": true,
"type": "object"
},
{
"type": "null"
}
],
"default": null,
"description": "Optional dictionary of specific options for the chosen renderer (e.g., data_bindings for RxInfer).",
"title": "Render Options"
}
},
"required": [
"gnn_specification",
"target_format"
],
"title": "RenderGnnInput",
"type": "object"
}

run_gnn_type_checker [src.tests.mcp] (file_path)
{
"file_path": {
"type": "string",
"description": "Path to the GNN file to check"
}
}

run_gnn_type_checker_on_directory [src.tests.mcp] (dir_path, report_file)
{
"dir_path": {
"type": "string",
"description": "Path to directory containing GNN files"
},
"report_file": {
"type": "string",
"description": "Optional path to save the report"
}
}

run_gnn_unit_tests [src.tests.mcp] ()
No schema provided.

sympy_analyze_stability [src.mcp.sympy_mcp] (transition_matrices)
{
"type": "object",
"properties": {
"transition_matrices": {
"type": "array",
"description": "List of transition matrices to analyze"
}
},
"required": [
"transition_matrices"
]
}

sympy_cleanup -- src.mcp.sympy_mcp() -- JSON schema:
{
"type": "object",
"properties": {}
}

sympy_get_latex -- src.mcp.sympy_mcp(expression) -- JSON schema:
{
"type": "object",
"properties": {
"expression": {
"type": "string",
"description": "Expression to convert to LaTeX"
}
},
"required": [
"expression"
]
}

sympy_initialize -- src.mcp.sympy_mcp(server_executable) -- JSON schema:
{
"type": "object",
"properties": {
"server_executable": {
"type": "string",
"description": "Path to SymPy MCP server executable",
"default": null
}
}
}

sympy_simplify_expression -- src.mcp.sympy_mcp(expression) -- JSON schema:
{
"type": "object",
"properties": {
"expression": {
"type": "string",
"description": "Mathematical expression to simplify"
}
},
"required": [
"expression"
]
}

sympy_solve_equation -- src.mcp.sympy_mcp(equation, variable, domain) -- JSON schema:
{
"type": "object",
"properties": {
"equation": {
"type": "string",
"description": "Equation to solve"
},
"variable": {
"type": "string",
"description": "Variable to solve for"
},
"domain": {
"type": "string",
"description": "Solution domain (COMPLEX, REAL, etc.)",
"default": "COMPLEX"
}
},
"required": [
"equation",
"variable"
]
}

sympy_validate_equation -- src.mcp.sympy_mcp(equation, context) -- JSON schema:
{
"type": "object",
"properties": {
"equation": {
"type": "string",
"description": "Mathematical equation to validate"
},
"context": {
"type": "object",
"description": "GNN context for variable definitions",
"default": {}
}
},
"required": [
"equation"
]
}

sympy_validate_matrix -- src.mcp.sympy_mcp(matrix_data, matrix_type) -- JSON schema:
{
"type": "object",
"properties": {
"matrix_data": {
"type": "array",
"description": "Matrix data as array of arrays"
},
"matrix_type": {
"type": "string",
"description": "Type of matrix (transition, observation, etc.)",
"default": "transition"
}
},
"required": [
"matrix_data"
]
}

type_check_gnn_directory -- src.gnn_type_checker.mcp(dir_path, recursive, output_dir_base, report_md_filename) -- JSON schema:
{
"dir_path": {
"type": "string",
"description": "Path to the directory containing GNN files to be type-checked."
},
"recursive": {
"type": "boolean",
"description": "Search directory recursively. Defaults to False.",
"optional": true
},
"output_dir_base": {
"type": "string",
"description": "Optional base directory to save the report and other artifacts (HTML, JSON).",
"optional": true
},
"report_md_filename": {
"type": "string",
"description": "Optional filename for the markdown report (e.g., 'my_report.md'). Defaults to 'type_check_report.md'.",
"optional": true
}
}

type_check_gnn_file -- src.gnn_type_checker.mcp(file_path) -- JSON schema:
{
"file_path": {
"type": "string",
"description": "Path to the GNN file to be type-checked."
}
}

visualize_gnn_directory -- src.visualization.mcp(dir_path, output_dir) -- JSON schema:
{
"dir_path": {
"type": "string",
"description": "Path to directory containing GNN files"
},
"output_dir": {
"type": "string",
"description": "Optional output directory"
}
}

visualize_gnn_file -- src.visualization.mcp(file_path, output_dir) -- JSON schema:
{
"file_path": {
"type": "string",
"description": "Path to the GNN file to visualize"
},
"output_dir": {
"type": "string",
"description": "Optional output directory"
}
}

This section verifies the presence of essential MCP files in the core directory: /home/trim/Documents/GitHub/GeneralizedNotationNotation/src/mcp
- mcp.py: Found (20304 bytes)
- meta_mcp.py: Found (4954 bytes)
- cli.py: Found (4644 bytes)
- server_stdio.py: Found (7620 bytes)
- server_http.py: Found (7731 bytes)

Status: 5/5 core MCP files found. All core files seem present.
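The presence check above can be sketched as a short script. This is a hedged illustration, not the project's actual checker; only the expected filenames and the status-line format are taken from the report output.

```python
from pathlib import Path

# Core files the report expects under src/mcp (names taken from the check above).
CORE_FILES = ["mcp.py", "meta_mcp.py", "cli.py", "server_stdio.py", "server_http.py"]

def check_core_mcp_files(core_dir: str) -> dict:
    """Map each expected core file to its size in bytes, or None if missing."""
    base = Path(core_dir)
    return {name: (base / name).stat().st_size if (base / name).is_file() else None
            for name in CORE_FILES}

def summarize(results: dict) -> str:
    """Render a status line in the same shape as the report."""
    found = sum(1 for size in results.values() if size is not None)
    return f"Status: {found}/{len(results)} core MCP files found."
```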
Checking for mcp.py in these subdirectories of /home/trim/Documents/GitHub/GeneralizedNotationNotation/src: ['export', 'gnn', 'gnn_type_checker', 'ontology', 'setup', 'tests', 'visualization', 'llm']
export (at /home/trim/Documents/GitHub/GeneralizedNotationNotation/src/export)
mcp.py Status: Found (7976 bytes)
- def _handle_export(export_func, gnn_file_path, output_file_path, format_name, requires_nx) (AST parsed): "Generic helper to run an export function and handle common exceptions."
- def export_gnn_to_gexf(gnn_file_path, output_file_path): "Exports a GNN model to GEXF graph format (requires NetworkX)." JSON schema:
{
"gnn_file_path": {
"type": "string",
"description": "Path to the input GNN Markdown file (.gnn.md)."
},
"output_file_path": {
"type": "string",
"description": "Path where the exported file will be saved."
}
}
- def export_gnn_to_gexf_mcp(gnn_file_path, output_file_path) (AST parsed)
- def export_gnn_to_graphml(gnn_file_path, output_file_path): "Exports a GNN model to GraphML graph format (requires NetworkX)." JSON schema:
{
"gnn_file_path": {
"type": "string",
"description": "Path to the input GNN Markdown file (.gnn.md)."
},
"output_file_path": {
"type": "string",
"description": "Path where the exported file will be saved."
}
}
- def export_gnn_to_graphml_mcp(gnn_file_path, output_file_path) (AST parsed)
- def export_gnn_to_json(gnn_file_path, output_file_path): "Exports a GNN model to JSON format." JSON schema:
{
"gnn_file_path": {
"type": "string",
"description": "Path to the input GNN Markdown file (.gnn.md)."
},
"output_file_path": {
"type": "string",
"description": "Path where the exported file will be saved."
}
}
- def export_gnn_to_json_adjacency_list(gnn_file_path, output_file_path): "Exports a GNN model to JSON Adjacency List graph format (requires NetworkX)." JSON schema:
{
"gnn_file_path": {
"type": "string",
"description": "Path to the input GNN Markdown file (.gnn.md)."
},
"output_file_path": {
"type": "string",
"description": "Path where the exported file will be saved."
}
}
- def export_gnn_to_json_adjacency_list_mcp(gnn_file_path, output_file_path) (AST parsed)
- def export_gnn_to_json_mcp(gnn_file_path, output_file_path) (AST parsed)
- def export_gnn_to_plaintext_dsl(gnn_file_path, output_file_path): "Exports a GNN model back to its GNN DSL plain text format." JSON schema:
{
"gnn_file_path": {
"type": "string",
"description": "Path to the input GNN Markdown file (.gnn.md)."
},
"output_file_path": {
"type": "string",
"description": "Path where the exported file will be saved."
}
}
- def export_gnn_to_plaintext_dsl_mcp(gnn_file_path, output_file_path) (AST parsed)
- def export_gnn_to_plaintext_summary(gnn_file_path, output_file_path): "Exports a GNN model to a human-readable plain text summary." JSON schema:
{
"gnn_file_path": {
"type": "string",
"description": "Path to the input GNN Markdown file (.gnn.md)."
},
"output_file_path": {
"type": "string",
"description": "Path where the exported file will be saved."
}
}
- def export_gnn_to_plaintext_summary_mcp(gnn_file_path, output_file_path) (AST parsed)
- def export_gnn_to_python_pickle(gnn_file_path, output_file_path): "Serializes a GNN model to a Python pickle file." JSON schema:
{
"gnn_file_path": {
"type": "string",
"description": "Path to the input GNN Markdown file (.gnn.md)."
},
"output_file_path": {
"type": "string",
"description": "Path where the exported file will be saved."
}
}
- def export_gnn_to_python_pickle_mcp(gnn_file_path, output_file_path) (AST parsed)
- def export_gnn_to_xml(gnn_file_path, output_file_path): "Exports a GNN model to XML format." JSON schema:
{
"gnn_file_path": {
"type": "string",
"description": "Path to the input GNN Markdown file (.gnn.md)."
},
"output_file_path": {
"type": "string",
"description": "Path where the exported file will be saved."
}
}
- def export_gnn_to_xml_mcp(gnn_file_path, output_file_path) (AST parsed)
- def register_tools(mcp_instance) (AST parsed): "Registers all GNN export tools with the MCP instance."

gnn (at /home/trim/Documents/GitHub/GeneralizedNotationNotation/src/gnn)
mcp.py Status: Found (4122 bytes)
- def _retrieve_gnn_doc_resource(uri) (AST parsed): "Retrieve GNN documentation resource by URI."
- def get_gnn_documentation(doc_name): "Retrieve the content of a GNN core documentation file (e.g., syntax, file structure)." JSON schema:
{
"doc_name": {
"type": "string",
"description": "Name of the GNN document (e.g., 'file_structure', 'punctuation')",
"enum": [
"file_structure",
"punctuation"
]
}
}
- def register_tools(mcp_instance) (AST parsed): "Register GNN documentation tools and resources with the MCP."

gnn_type_checker (at /home/trim/Documents/GitHub/GeneralizedNotationNotation/src/gnn_type_checker)
mcp.py Status: Found (10921 bytes)
- def estimate_resources_for_gnn_directory(dir_path, recursive): "Estimates computational resources for all GNN files in a specified directory." JSON schema:
{
"dir_path": {
"type": "string",
"description": "Path to the directory for GNN resource estimation."
},
"recursive": {
"type": "boolean",
"description": "Search directory recursively. Defaults to False.",
"optional": true
}
}
- def estimate_resources_for_gnn_directory_mcp(dir_path, recursive) (AST parsed): "Estimate resources for all GNN files in a directory. Exposed via MCP."
- def estimate_resources_for_gnn_file(file_path): "Estimates computational resources (memory, inference, storage) for a GNN model file." JSON schema:
{
"file_path": {
"type": "string",
"description": "Path to the GNN file for resource estimation."
}
}
- def estimate_resources_for_gnn_file_mcp(file_path) (AST parsed): "Estimate computational resources for a single GNN file. Exposed via MCP."
- def register_tools(mcp_instance) (AST parsed): "Register GNN type checker and resource estimator tools with the MCP."
- def type_check_gnn_directory(dir_path, recursive, output_dir_base, report_md_filename): "Runs the GNN type checker on all GNN files in a specified directory. If output_dir_base is provided, reports are generated." JSON schema:
{
"dir_path": {
"type": "string",
"description": "Path to the directory containing GNN files to be type-checked."
},
"recursive": {
"type": "boolean",
"description": "Search directory recursively. Defaults to False.",
"optional": true
},
"output_dir_base": {
"type": "string",
"description": "Optional base directory to save the report and other artifacts (HTML, JSON).",
"optional": true
},
"report_md_filename": {
"type": "string",
"description": "Optional filename for the markdown report (e.g., 'my_report.md'). Defaults to 'type_check_report.md'.",
"optional": true
}
}
- def type_check_gnn_directory_mcp(dir_path, recursive, output_dir_base, report_md_filename) (AST parsed): "Run the GNN type checker on all GNN files in a directory. Exposed via MCP."
- def type_check_gnn_file(file_path): "Runs the GNN type checker on a specified GNN model file." JSON schema:
{
"file_path": {
"type": "string",
"description": "Path to the GNN file to be type-checked."
}
}
- def type_check_gnn_file_mcp(file_path) (AST parsed): "Run the GNN type checker on a single GNN file. Exposed via MCP."

ontology (at /home/trim/Documents/GitHub/GeneralizedNotationNotation/src/ontology)
mcp.py Status: Found (13473 bytes)
- def generate_ontology_report_for_file(gnn_file_path, parsed_annotations, validation_results) (AST parsed): "Generates a markdown formatted report string for a single GNN file's ontology annotations."
- def get_mcp_interface() (AST parsed): "Returns the MCP interface for the Ontology module."
- def load_defined_ontology_terms(ontology_terms_path, verbose) (AST parsed): "Loads defined ontological terms from a JSON file."
- def parse_gnn_ontology_section(gnn_file_content, verbose) (AST parsed): "Parses the 'ActInfOntologyAnnotation' section from GNN file content."
- def validate_annotations(parsed_annotations, defined_terms, verbose) (AST parsed): "Validates parsed GNN annotations against a set of defined ontological terms."

setup (at /home/trim/Documents/GitHub/GeneralizedNotationNotation/src/setup)
mcp.py Status: Found (4257 bytes)
- def ensure_directory_exists(directory_path): "Ensures a directory exists, creating it if necessary. Returns the absolute path." JSON schema:
{
"directory_path": {
"type": "string",
"description": "Path of the directory to create if it doesn't exist."
}
}
- def ensure_directory_exists_mcp(directory_path) (AST parsed): "Ensure a directory exists, creating it if necessary. Exposed via MCP."
- def find_project_gnn_files(search_directory, recursive): "Finds all GNN (.md) files in a specified directory within the project." JSON schema:
{
"search_directory": {
"type": "string",
"description": "The directory to search for GNN (.md) files."
},
"recursive": {
"type": "boolean",
"description": "Set to true to search recursively. Defaults to false.",
"optional": true
}
}
- def find_project_gnn_files_mcp(search_directory, recursive) (AST parsed): "Find all GNN (.md) files in a directory. Exposed via MCP."
- def get_standard_output_paths(base_output_directory): "Gets a dictionary of standard output directory paths (e.g., for type_check, visualization), creating them if needed." JSON schema:
{
"base_output_directory": {
"type": "string",
"description": "The base directory where output subdirectories will be managed."
}
}
- def get_standard_output_paths_mcp(base_output_directory) (AST parsed): "Get standard output paths for the pipeline. Exposed via MCP."
- def register_tools(mcp_instance) (AST parsed): "Register setup utility tools with the MCP."

tests (at /home/trim/Documents/GitHub/GeneralizedNotationNotation/src/tests)
mcp.py Status: Found (7083 bytes)
- def get_test_report(uri) (AST parsed): "Retrieve a test report by URI."
- def register_tools(mcp) (AST parsed): "Register test tools with the MCP."
- def run_gnn_type_checker(file_path): "Run the GNN type checker on a specific file (via test module)." JSON schema:
{
"file_path": {
"type": "string",
"description": "Path to the GNN file to check"
}
}
- def run_gnn_type_checker_on_directory(dir_path, report_file): "Run the GNN type checker on all GNN files in a directory (via test module)." JSON schema:
{
"dir_path": {
"type": "string",
"description": "Path to directory containing GNN files"
},
"report_file": {
"type": "string",
"description": "Optional path to save the report"
}
}
- def run_gnn_unit_tests(): "Run the GNN unit tests and return results."
- def run_type_checker_on_directory(dir_path, report_file) (AST parsed): "Run the GNN type checker on a directory of files."
- def run_type_checker_on_file(file_path) (AST parsed): "Run the GNN type checker on a file."
- def run_unit_tests() (AST parsed): "Run the GNN unit tests."

visualization (at /home/trim/Documents/GitHub/GeneralizedNotationNotation/src/visualization)
mcp.py Status: Found (5934 bytes)
- def get_visualization_results(uri) (AST parsed): "Retrieve visualization results by URI."
- def parse_gnn_file(file_path): "Parse a GNN file without visualization." JSON schema:
{
"file_path": {
"type": "string",
"description": "Path to the GNN file to parse"
}
}
- def register_tools(mcp) (AST parsed): "Register visualization tools with the MCP."
- def visualize_directory(dir_path, output_dir) (AST parsed): "Visualize all GNN files in a directory through MCP."
- def visualize_file(file_path, output_dir) (AST parsed): "Visualize a GNN file through MCP."
- def visualize_gnn_directory(dir_path, output_dir): "Visualize all GNN files in a directory." JSON schema:
{
"dir_path": {
"type": "string",
"description": "Path to directory containing GNN files"
},
"output_dir": {
"type": "string",
"description": "Optional output directory"
}
}
- def visualize_gnn_file(file_path, output_dir): "Generate visualizations for a specific GNN file." JSON schema:
{
"file_path": {
"type": "string",
"description": "Path to the GNN file to visualize"
},
"output_dir": {
"type": "string",
"description": "Optional output directory"
}
}

llm (at /home/trim/Documents/GitHub/GeneralizedNotationNotation/src/llm)
mcp.py Status: Found (19238 bytes)
- def ensure_llm_tools_registered(mcp_instance_ref) (AST parsed): "Ensures that LLM tools are registered with the provided MCP instance."
- def explain_gnn_file_content(file_path_str, aspect_to_explain) (AST parsed): "Reads a GNN file, sends its content to an LLM, and returns an explanation."
- def generate_professional_summary_from_gnn(file_path_str, experiment_details, target_audience) (AST parsed): "Generates a professional summary of a GNN model and its experimental context."
- def initialize_llm_module(mcp_instance_ref) (AST parsed): "Initializes the LLM module, loads API key, and updates MCP status."
- def llm.explain_gnn_file(file_path_str, aspect_to_explain): "Reads a GNN specification file and uses an LLM to generate an explanation of its content. Can focus on a specific aspect if provided." JSON schema:
{
"type": "object",
"properties": {
"file_path_str": {
"type": "string",
"description": "The absolute or relative path to the GNN file."
},
"aspect_to_explain": {
"type": "string",
"description": "(Optional) A specific part or concept within the GNN to focus the explanation on."
}
},
"required": [
"file_path_str"
]
}
- def llm.generate_professional_summary(file_path_str, experiment_details, target_audience): "Reads a GNN file and optional experiment details, then uses an LLM to generate a professional summary suitable for reports or papers." JSON schema:
{
"type": "object",
"properties": {
"file_path_str": {
"type": "string",
"description": "The absolute or relative path to the GNN file."
},
"experiment_details": {
"type": "string",
"description": "(Optional) Text describing the experiments conducted with the model, including setup, results, or observations."
},
"target_audience": {
"type": "string",
"description": "(Optional) The intended audience for the summary (e.g., 'fellow researchers', 'project managers'). Default: 'fellow researchers'."
}
},
"required": [
"file_path_str"
]
}
- def llm.summarize_gnn_file(file_path_str, user_prompt_suffix): "Reads a GNN specification file and uses an LLM to generate a concise summary of its content. Optionally, a user prompt suffix can refine the summary focus." JSON schema:
{
"type": "object",
"properties": {
"file_path_str": {
"type": "string",
"description": "The absolute or relative path to the GNN file (.md, .gnn.md, .json)."
},
"user_prompt_suffix": {
"type": "string",
"description": "(Optional) Additional instructions or focus points for the summary."
}
},
"required": [
"file_path_str"
]
}
- def register_tools(mcp_instance_ref) (AST parsed)
- def summarize_gnn_file_content(file_path_str, user_prompt_suffix) (AST parsed): "Reads a GNN file, sends its content to an LLM, and returns a summary."

mcp.py Integrations Found: 8/8 checked subdirectories provide an mcp.py integration file.
Please ensure each functional module that should be exposed via MCP has its own mcp.py following the project's MCP architecture.

Report Generated: 2025-06-07 08:31:05
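Each module-level mcp.py in the listing above exposes a register_tools entry point. A minimal sketch of that convention might look like the following; the register_tool method signature on the MCP instance is assumed here for illustration and is not the project's actual API, and the tool body is a stand-in.

```python
from typing import Optional

def visualize_gnn_file(file_path: str, output_dir: Optional[str] = None) -> dict:
    # Stand-in tool body; the real implementation generates visualizations.
    return {"file_path": file_path, "output_dir": output_dir}

def register_tools(mcp_instance):
    """Register this module's tools with the MCP instance (hypothetical API)."""
    mcp_instance.register_tool(
        name="visualize_gnn_file",
        func=visualize_gnn_file,
        schema={
            "file_path": {"type": "string",
                          "description": "Path to the GNN file to visualize"},
            "output_dir": {"type": "string",
                           "description": "Optional output directory"},
        },
        description="Generate visualizations for a specific GNN file.",
    )
```

The schema dictionary mirrors the JSON schemas shown earlier in this report, so a registry check like the 8/8 count above only needs to verify that each module defines a callable register_tools.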
🎯 GNN Source Directory: src/gnn/examples
📖 Ontology Terms Definition: src/ontology/act_inf_ontology_terms.json (Loaded: 48 terms)
src/gnn/examples/pymdp_pomdp_agent.md
- A_m0 -> LikelihoodMatrixModality0
- A_m1 -> LikelihoodMatrixModality1
- A_m2 -> LikelihoodMatrixModality2
- B_f0 -> TransitionMatrixFactor0
- B_f1 -> TransitionMatrixFactor1
- C_m0 -> LogPreferenceVectorModality0
- C_m1 -> LogPreferenceVectorModality1
- C_m2 -> LogPreferenceVectorModality2
- D_f0 -> PriorOverHiddenStatesFactor0
- D_f1 -> PriorOverHiddenStatesFactor1
- s_f0 -> HiddenStateFactor0
- s_f1 -> HiddenStateFactor1
- s_prime_f0 -> NextHiddenStateFactor0
- s_prime_f1 -> NextHiddenStateFactor1
- o_m0 -> ObservationModality0
- o_m1 -> ObservationModality1
- o_m2 -> ObservationModality2
- π_f1 -> PolicyVectorFactor1
- u_f1 -> ActionFactor1
- G -> ExpectedFreeEnergy

Validation Summary: All ontological terms are recognized.
src/gnn/examples/rxinfer_multiagent_gnn.md
- dt -> TimeStep (INVALID TERM)
- gamma -> ConstraintParameter (INVALID TERM)
- nr_steps -> TrajectoryLength (INVALID TERM)
- nr_iterations -> InferenceIterations (INVALID TERM)
- nr_agents -> NumberOfAgents (INVALID TERM)
- softmin_temperature -> SoftminTemperature (INVALID TERM)
- A -> StateTransitionMatrix (INVALID TERM)
- B -> ControlInputMatrix (INVALID TERM)
- C -> ObservationMatrix (INVALID TERM)
- initial_state_variance -> InitialStateVariance (INVALID TERM)
- control_variance -> ControlVariance (INVALID TERM)
- goal_constraint_variance -> GoalConstraintVariance (INVALID TERM)

Validation Summary: 12 unrecognized ontological term(s) found.
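The validation step above maps each GNN variable to an ontology term and flags any term absent from the 48 loaded Active Inference ontology terms. It can be sketched as follows; this mirrors the reported behavior, not necessarily the exact code of validate_annotations in src/ontology/mcp.py.

```python
def validate_annotations(parsed_annotations: dict, defined_terms: set) -> dict:
    """Split variable->term annotations into recognized and unrecognized terms."""
    valid, invalid = {}, {}
    for variable, term in parsed_annotations.items():
        # A term is valid only if it appears in the defined ontology term set.
        (valid if term in defined_terms else invalid)[variable] = term
    summary = ("All ontological terms are recognized." if not invalid
               else f"{len(invalid)} unrecognized ontological term(s) found.")
    return {"valid": valid, "invalid": invalid, "summary": summary}
```

With the rxinfer mappings above, all 12 annotations would land in `invalid`, reproducing the "12 unrecognized ontological term(s) found" summary.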
{
"model_purpose": "The GNN file represents a Multifactor PyMDP agent designed for active inference, incorporating multiple observation modalities and hidden state factors for decision-making in uncertain environments.",
"key_components": {
"states": {
"hidden_states": {
"reward_level": {
"states": 2,
"description": "Represents levels of reward received."
},
"decision_state": {
"states": 3,
"description": "Represents the state of decision-making."
}
}
},
"observations": {
"state_observation": {
"outcomes": 3,
"description": "Observations related to the current state."
},
"reward": {
"outcomes": 3,
"description": "Observations related to the reward received."
},
"decision_proprioceptive": {
"outcomes": 3,
"description": "Observations related to decision-making processes."
}
},
"actions": {
"decision_state": {
"actions": 3,
"description": "Controllable actions that affect the decision state."
}
},
"parameters": {
"A_matrices": "Likelihood matrices for each modality.",
"B_matrices": "Transition matrices for each hidden state factor.",
"C_vectors": "Preference vectors for each modality.",
"D_vectors": "Prior distributions for hidden states."
}
},
"component_interactions": {
"hidden_states": [
"The hidden states (s_f0, s_f1) influence the likelihood (A_m0, A_m1, A_m2) of observations.",
"The current hidden states and actions (u_f1) affect the transitions (B_f0, B_f1) to the next hidden states (s_prime_f0, s_prime_f1)."
],
"observations": [
"The observation modalities (o_m0, o_m1, o_m2) depend on the current hidden states and the corresponding likelihood matrices (A_m0, A_m1, A_m2)."
],
"policy": [
"The expected free energy (G) is influenced by preferences (C_m0, C_m1, C_m2) and is used to infer the policy (\u03c0_f1) for decision-making."
]
},
"data_types_and_dimensions": {
"A_matrices": {
"dimensions": "[3, 2, 3]",
"type": "float"
},
"B_f_matrices": {
"dimensions": {
"B_f0": "[2, 2, 1]",
"B_f1": "[3, 3, 3]"
},
"type": "float"
},
"C_vectors": {
"dimensions": {
"C_m0": "[3]",
"C_m1": "[3]",
"C_m2": "[3]"
},
"type": "float"
},
"D_vectors": {
"dimensions": {
"D_f0": "[2]",
"D_f1": "[3]"
},
"type": "float"
},
"hidden_states": {
"dimensions": {
"s_f0": "[2, 1]",
"s_f1": "[3, 1]",
"s_prime_f0": "[2, 1]",
"s_prime_f1": "[3, 1]"
},
"type": "float"
},
"observations": {
"dimensions": {
"o_m0": "[3, 1]",
"o_m1": "[3, 1]",
"o_m2": "[3, 1]"
},
"type": "float"
},
"policy": {
"dimensions": "[3]",
"type": "float"
},
"time": {
"dimensions": "[1]",
"type": "int"
}
},
"potential_applications": [
"Reinforcement learning scenarios where an agent must make decisions based on uncertain observations.",
"Robotics, where multiple sensory modalities need to be integrated for effective decision-making.",
"Simulation of cognitive processes in artificial intelligence, particularly in environments requiring active exploration and exploitation."
],
"limitations_or_ambiguities": [
"The model assumes a discrete time framework but may not adequately address continuous-time dynamics.",
"The control factor for 'reward_level' is specified as uncontrolled, which may limit the agent's ability to adapt based on rewards.",
"The model time horizon is unbounded, which may lead to computational challenges in long-term simulations."
],
"ontology_mapping_assessment": {
"ActInfOntologyTerms": {
"present": true,
"relevant": [
"LikelihoodMatrixModality0 (A_m0)",
"LikelihoodMatrixModality1 (A_m1)",
"LikelihoodMatrixModality2 (A_m2)",
"TransitionMatrixFactor0 (B_f0)",
"TransitionMatrixFactor1 (B_f1)",
"LogPreferenceVectorModality0 (C_m0)",
"LogPreferenceVectorModality1 (C_m1)",
"LogPreferenceVectorModality2 (C_m2)",
"PriorOverHiddenStatesFactor0 (D_f0)",
"PriorOverHiddenStatesFactor1 (D_f1)",
"HiddenStateFactor0 (s_f0)",
"HiddenStateFactor1 (s_f1)",
"NextHiddenStateFactor0 (s_prime_f0)",
"NextHiddenStateFactor1 (s_prime_f1)",
"ObservationModality0 (o_m0)",
"ObservationModality1 (o_m1)",
"ObservationModality2 (o_m2)",
"PolicyVectorFactor1 (\u03c0_f1)",
"ActionFactor1 (u_f1)",
"ExpectedFreeEnergy (G)"
]
}
}
}

pymdp_pomdp_agent_comprehensive_analysis.json

[
{
"question": "How do the multiple observation modalities influence the decision-making process of the Multifactor PyMDP Agent?",
"answer": "The GNN file does not explicitly detail how the multiple observation modalities influence the decision-making process of the Multifactor PyMDP Agent. However, it indicates that there are three observation modalities: \"state_observation,\" \"reward,\" and \"decision_proprioceptive,\" each with multiple outcomes. The likelihood matrices (A_m0, A_m1, A_m2) define how these observations relate to the hidden states, suggesting that the agent uses these observations to infer its hidden states, which subsequently inform its decision-making process through transition matrices (B_f0, B_f1) and policy vectors (\u03c0_f1). \n\nIn summary, while the file does not provide a direct explanation of the influence of the modalities on decision-making, it implies that they play a critical role in state inference and the overall decision-making framework of the agent."
},
{
"question": "What assumptions are made regarding the independence of hidden state factors in the context of this model?",
"answer": "The GNN file does not explicitly state any assumptions regarding the independence of hidden state factors in the context of the Multifactor PyMDP agent model. Therefore, there is insufficient information to answer the question about assumptions of independence regarding hidden state factors."
},
{
"question": "In what ways does the control over the 'decision_state' factor impact the agent's ability to adapt to changing environments?",
"answer": "The GNN file does not provide explicit information on how control over the 'decision_state' factor impacts the agent's ability to adapt to changing environments. It mentions that the 'decision_state' factor is controllable with 3 possible actions, but it does not elaborate on the implications of this control for adaptability or how the agent responds to changes in the environment. Therefore, there is not enough information in the provided GNN content to answer the question."
},
{
"question": "How does the choice of prior distributions for hidden states (D_f0 and D_f1) affect the agent's performance and learning?",
"answer": "The GNN file does not provide sufficient information to directly assess how the choice of prior distributions for hidden states (D_f0 and D_f1) affects the agent's performance and learning. While it specifies that D_f0 is a uniform prior over two states and D_f1 is a uniform prior over three states, it does not elaborate on the implications of these choices on the agent's learning dynamics or performance outcomes. Additional context or empirical results would be needed to draw conclusions about their impact."
},
{
"question": "What implications does the unbounded time horizon have on the planning and decision-making capabilities of the Multifactor PyMDP Agent?",
"answer": "The GNN file indicates that the Multifactor PyMDP Agent has an unbounded time horizon, which implies that the agent is designed to operate continuously without a predefined endpoint for decision-making or planning. This allows the agent to adapt its strategies and learning processes over an indefinite period, potentially leading to more complex and long-term planning capabilities. However, the specifics of how this affects the agent's decision-making dynamics\u2014such as its ability to prioritize short-term versus long-term rewards or how it updates its beliefs and actions over time\u2014are not detailed in the GNN file. Therefore, while the unbounded time horizon suggests enhanced flexibility and adaptability in planning and decision-making, the exact implications remain unspecified in the provided content."
}
]

pymdp_pomdp_agent_qa.json

### Summary of the Multifactor PyMDP Agent GNN Model
**Model Name:** Multifactor PyMDP Agent v1
**Purpose:** This model represents a PyMDP agent for a POMDP (partially observable Markov decision process), designed to handle multiple observation modalities and hidden state factors within an Active Inference framework. It aims to support decision-making by modeling the interactions between observations, hidden states, and control mechanisms.
**Key Components:**
1. **Observation Modalities:**
- **State Observation:** 3 possible outcomes.
- **Reward:** 3 possible outcomes.
- **Decision Proprioceptive:** 3 possible outcomes.
2. **Hidden State Factors:**
- **Reward Level:** 2 possible states.
- **Decision State:** 3 possible states, which is controllable with 3 potential actions.
3. **State and Transition Matrices:**
- **A_matrices:** Likelihood matrices for each observation modality (3 matrices corresponding to the 3 modalities).
- **B_factors:** Transition matrices for hidden state factors, with B_f0 being uncontrolled and B_f1 controlled by a policy.
4. **Preference Vectors:**
- **C_vectors:** Preference vectors associated with each observation modality, indicating the agent's preferences for observed outcomes.
5. **Prior Distributions:**
- **D_factors:** Priors over the hidden states, indicating initial beliefs about the hidden states before observing any data.
**Main Connections:**
- **Hidden States to Observations:** The hidden states influence the likelihood of observing specific outcomes through the A_matrices.
- **Control Mechanism:** The action taken (u_f1) affects the transition dynamics of the decision state (B_f1), which in turn influences the next hidden state.
- **Expected Free Energy (G):** The expected free energy is derived from the preferences and drives the policy (π_f1), linking the agent's decision-making to its beliefs about the world.
- **Iterative Inference:** The model employs standard PyMDP equations for inferring states, inferring policies, and sampling actions, illustrating the dynamic nature of the decision-making process.
This model serves as a sophisticated framework for understanding and simulating agent behavior in environments with multiple sources of information and decision-making complexities.

pymdp_pomdp_agent_summary.txt

{
"model_purpose": "The model is designed for multi-agent trajectory planning in a 2D environment, incorporating obstacle avoidance, inter-agent collision avoidance, and goal-directed behaviors. It serves as a demonstration of the RxInfer.jl framework capabilities in simulating complex agent interactions in spatial settings.",
"key_components": {
"state_space_model": {
"description": "The state space model describes how agents transition between states based on their control inputs and the presence of noise.",
"components": {
"A": "State transition matrix",
"B": "Control input matrix",
"C": "Observation matrix"
}
},
"agents": [
{
"id": 1,
"initial_position": "(-4.0, 10.0)",
"target_position": "(-10.0, -10.0)",
"radius": 2.5
},
{
"id": 2,
"initial_position": "(-10.0, 5.0)",
"target_position": "(10.0, -15.0)",
"radius": 1.5
},
{
"id": 3,
"initial_position": "(-15.0, -10.0)",
"target_position": "(10.0, 10.0)",
"radius": 1.0
},
{
"id": 4,
"initial_position": "(0.0, -10.0)",
"target_position": "(-10.0, 15.0)",
"radius": 2.5
}
],
"obstacles": {
"door": [
{
"center": "(-40.0, 0.0)",
"size": "(70.0, 5.0)"
},
{
"center": "(40.0, 0.0)",
"size": "(70.0, 5.0)"
}
],
"wall": {
"center": "(0.0, 0.0)",
"size": "(10.0, 5.0)"
},
"combined": [
{
"center": "(-50.0, 0.0)",
"size": "(70.0, 2.0)"
},
{
"center": "(50.0, 0.0)",
"size": "(70.0, 2.0)"
},
{
"center": "(5.0, -1.0)",
"size": "(3.0, 10.0)"
}
]
}
},
"component_interactions": {
"state_space_model": {
"inputs": [
"dt",
"A",
"B",
"C"
],
"outputs": "agent_trajectories"
},
"agent_trajectories": {
"inputs": [
"initial_state_variance",
"control_variance"
],
"outputs": [
"goal_directed_behavior",
"obstacle_avoidance",
"collision_avoidance"
]
},
"goal_directed_behavior": {
"inputs": "goal_constraint_variance",
"output": "planning_system"
},
"obstacle_avoidance": {
"inputs": [
"gamma",
"gamma_shape",
"gamma_scale_factor"
],
"output": "planning_system"
},
"collision_avoidance": {
"inputs": "nr_agents",
"output": "planning_system"
}
},
"data_types_and_dimensions": {
"parameters": {
"dt": "float",
"gamma": "float",
"nr_steps": "int",
"nr_iterations": "int",
"nr_agents": "int",
"softmin_temperature": "float",
"intermediate_steps": "int",
"save_intermediates": "bool",
"initial_state_variance": "float",
"control_variance": "float",
"goal_constraint_variance": "float",
"gamma_shape": "float",
"gamma_scale_factor": "float"
},
"matrices": {
"A": "4x4 float",
"B": "4x2 float",
"C": "2x4 float"
},
"visualization": {
"x_limits": "2 float",
"y_limits": "2 float",
"fps": "int",
"heatmap_resolution": "int",
"plot_width": "int",
"plot_height": "int",
"agent_alpha": "float",
"target_alpha": "float",
"color_palette": "string"
},
"agent_data": {
"id": "int",
"radius": "float",
"initial_position": "2 float",
"target_position": "2 float"
},
"obstacle_data": {
"center": "2 float",
"size": "2 float"
}
},
"potential_applications": [
"Simulating multi-agent navigation in dynamic environments",
"Testing algorithms for real-time trajectory planning",
"Evaluating collision avoidance techniques in robotics",
"Research in swarm intelligence and cooperative behavior"
],
"limitations_or_ambiguities": [
"The file does not specify the exact nature of the noise in agent trajectories, which could affect simulation fidelity.",
"The interaction between agents during trajectory planning is described but not quantified, leaving ambiguity in collision avoidance behavior.",
"The visualization parameters are defined but not explicitly linked to the results, raising questions about output interpretation."
],
"ontology_mapping_assessment": {
"ActInfOntologyTerms": [
"TimeStep",
"ConstraintParameter",
"TrajectoryLength",
"InferenceIterations",
"NumberOfAgents",
"SoftminTemperature",
"StateTransitionMatrix",
"ControlInputMatrix",
"ObservationMatrix",
"InitialStateVariance",
"ControlVariance",
"GoalConstraintVariance"
],
"relevance": "The ontology terms are relevant and appropriately map to the parameters and components of the model, enhancing clarity and standardization."
}
}

rxinfer_multiagent_gnn_comprehensive_analysis.json

[
{
"question": "What specific assumptions are made about the agents' behavior in terms of goal-directedness and collision avoidance, and how might different assumptions affect the trajectory planning outcomes?",
"answer": "The GNN file makes specific assumptions about the agents' behavior as follows:\n\n1. **Goal-Directedness**: The agents are assumed to have target positions they aim to reach, which is represented by the goal constraints in the model. The model uses a Gaussian distribution to represent the likelihood of an agent's final position being near its target.\n\n2. **Collision Avoidance**: The model includes collision avoidance constraints that ensure agents maintain a safe distance from one another based on their radii. This is represented by the equation that models the likelihood of two agents being a certain distance apart, factoring in their sizes.\n\nDifferent assumptions about these behaviors could significantly affect trajectory planning outcomes. For instance:\n\n- **If agents are assumed to be more aggressive in pursuing their targets (less cautious)**, they may end up colliding more frequently, which could lead to less efficient trajectories and potential failures in reaching their goals.\n \n- **Conversely, if agents are overly cautious (more conservative)**, they may take longer paths to avoid collisions, resulting in delays in reaching their targets and potentially suboptimal trajectories.\n\nOverall, the balance between goal-directed behavior and collision avoidance is crucial in determining the effectiveness of the trajectory planning model."
},
{
"question": "How does the choice of the softmin_temperature parameter influence the agents' decision-making processes during trajectory planning?",
"answer": "The provided GNN file does not contain explicit information on how the choice of the softmin_temperature parameter influences the agents' decision-making processes during trajectory planning. Therefore, I cannot provide a specific answer regarding its impact based solely on the content of the GNN file."
},
{
"question": "In what ways do the obstacle definitions impact the feasibility of the planned trajectories, and how could this model be adapted for more complex environments?",
"answer": "The GNN file outlines specific obstacle definitions that impact the feasibility of planned trajectories in the multi-agent trajectory planning model. These obstacles, defined by their centers and sizes, create constraints that agents must navigate around, thus affecting their path planning and movements. The model incorporates obstacle avoidance constraints that probabilistically account for the distance from agents to obstacles, influencing trajectory optimization.\n\nTo adapt this model for more complex environments, additional obstacle types could be introduced, such as dynamic obstacles that move during the simulation. Furthermore, the model could incorporate varying obstacle shapes and sizes, or even include environmental features like terrain that affect movement. Adding more sophisticated collision avoidance algorithms or integrating real-time sensing and adaptation mechanisms could also enhance the model's applicability in complex scenarios."
},
{
"question": "What are the implications of the prior distributions on initial state and control variances for the overall performance and reliability of the trajectory planning?",
"answer": "The GNN file specifies prior distributions for the initial state variance and control variance as follows:\n\n- **Initial State Variance**: Set to 100.0, indicating a high uncertainty in the initial positions of agents. This large variance can lead to less reliable trajectory planning, as it suggests that agents' starting positions could be far from their actual locations, potentially resulting in inefficient trajectories and increased risk of collision.\n\n- **Control Variance**: Set to 0.1, which implies relatively low uncertainty in the control inputs applied to the agents. This low variance suggests that control actions are expected to be reliable, allowing for more consistent movement towards targets. However, if the initial state variance is too high, even precise control may not suffice to ensure effective trajectory planning.\n\nOverall, the high initial state variance can negatively impact the performance of the trajectory planning by introducing significant uncertainty in agents' starting conditions, while the low control variance contributes positively by ensuring that the agents can reliably execute their planned trajectories. The combination implies that the trajectory planning system may struggle to perform optimally unless the initial state uncertainty is reduced."
},
{
"question": "How does the model handle variations in the number of agents, and what challenges might arise as the number of agents increases in terms of computation and collision avoidance?",
"answer": "The model handles variations in the number of agents through a parameter `nr_agents`, which specifies the number of agents in the simulation. This parameter is integrated into the connections where it influences the collision avoidance mechanism, indicating that the model adjusts its computations based on the number of agents present.\n\nChallenges that might arise as the number of agents increases include:\n\n1. **Computation Complexity**: The computational load may increase significantly due to the need for more complex calculations involving all agents, particularly for the collision avoidance constraints, which require pairwise evaluations of agents' positions.\n\n2. **Collision Avoidance**: As the number of agents grows, the likelihood of collisions increases. The model must effectively manage interactions between a larger number of agents, which can complicate the avoidance strategies and require more sophisticated algorithms to ensure safety and efficiency.\n\nThe GNN file does not provide explicit details on how these challenges are addressed, leaving the specifics of their management open to interpretation."
}
]

rxinfer_multiagent_gnn_qa.json

### Summary of the GNN Model: Multi-agent Trajectory Planning
**Model Name:** Multi-agent Trajectory Planning
**Purpose:**
This model is designed for simulating trajectory planning of multiple agents in a 2D environment using the RxInfer.jl framework. It incorporates obstacle avoidance, goal-directed behavior, and inter-agent collision avoidance to effectively manage the agents' movements in complex environments.
**Key Components:**
1. **State Space Model:**
- **State Transition Matrix (A):** Defines how agent states evolve over time.
- **Control Input Matrix (B):** Represents how control inputs affect the agents' states.
- **Observation Matrix (C):** Maps the state variables to observable outputs.
- **Model Parameters:** Includes time step (`dt`), constraint parameters (`gamma`), number of time steps (`nr_steps`), number of agents (`nr_agents`), and variance settings for initial state, control inputs, and goal constraints.
2. **Agent Configurations:**
   - Each of the four agents is defined by an ID, a radius, an initial position, and a target position.
3. **Environment Definitions:**
- Various obstacles are defined, including door obstacles, wall obstacles, and combined obstacles, which influence the agents' trajectories.
4. **Visualization Parameters:**
- Settings for visualizing results, including plot boundaries, heatmap resolution, and color settings.
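The linear state-space core listed above can be sketched in a few lines. Note that the GNN file specifies only the matrix dimensions (A: 4x4, B: 4x2, C: 2x4); the concrete entries below are an assumption, using a standard constant-velocity model with state `[x, y, vx, vy]` and the file's time step `dt`, and the control input `u` is purely hypothetical.

```python
import numpy as np

dt = 0.1  # assumed time step; the file defines dt but its value is not quoted here

A = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])          # state transition (4x4)
B = np.array([[0, 0],
              [0, 0],
              [dt, 0],
              [0, dt]])               # control input mapped to velocities (4x2)
C = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]])          # observe positions only (2x4)

x = np.array([-4.0, 10.0, 0.0, 0.0])  # agent 1's initial position, at rest
u = np.array([1.0, -1.0])             # hypothetical control input

x_next = A @ x + B @ u                # one transition step (noise omitted)
y = C @ x_next                        # observed position
```

In the actual model these matrices parameterize a probabilistic state-space prior over trajectories rather than a deterministic update; the sketch shows only the mean dynamics.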
**Main Connections:**
- The model's dynamics are established through connections between components:
- The state transition (`dt > A`) and the matrices (`A, B, C > state_space_model`) form the core of the state space model.
- Agent trajectories are influenced by initial state variance and control variance, linking to `goal_directed_behavior` and `obstacle_avoidance`.
- Collision avoidance is addressed through the connections between agent trajectories and the number of agents, culminating in a complete planning system that integrates goal-directed behavior, obstacle avoidance, and collision avoidance.
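The pairwise collision-avoidance connection described above can be illustrated with a simple check: agents i and j are in conflict when the distance between their centers falls below the sum of their radii. Positions and radii are taken from the agent configurations in the analysis; the hard threshold is an illustrative stand-in for the model's probabilistic constraint, and the quadratic number of pairs is what drives the scaling cost noted in the Q&A.

```python
import numpy as np
from itertools import combinations

# Agent data from the analysis above: id -> (initial position, radius).
agents = {
    1: {"pos": np.array([-4.0, 10.0]),   "radius": 2.5},
    2: {"pos": np.array([-10.0, 5.0]),   "radius": 1.5},
    3: {"pos": np.array([-15.0, -10.0]), "radius": 1.0},
    4: {"pos": np.array([0.0, -10.0]),   "radius": 2.5},
}

def colliding(a, b):
    """True when the two agents' safety discs overlap."""
    return np.linalg.norm(a["pos"] - b["pos"]) < a["radius"] + b["radius"]

# All n*(n-1)/2 pairwise checks -- the source of the quadratic cost
# as nr_agents grows.
conflicts = [(i, j) for i, j in combinations(agents, 2)
             if colliding(agents[i], agents[j])]
```

At the initial positions no pair of safety discs overlaps, so `conflicts` is empty; the planner must keep this property true at every step of the trajectory.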
**Overall Model Structure:**
This GNN model effectively simulates the behavior of multiple agents navigating a 2D space, accounting for dynamic interactions with each other and their environment, while ensuring that they avoid obstacles and achieve their designated goals.

rxinfer_multiagent_gnn_summary.txt
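The Gaussian goal constraint discussed in the Q&A can be sketched as a log-likelihood over an agent's final position. The file's `initial_state_variance` (100.0) and `control_variance` (0.1) are quoted in the analysis, but the exact `goal_constraint_variance` is not, so the value below is purely illustrative.

```python
import numpy as np

# Assumed value for illustration; the file defines goal_constraint_variance
# but its number is not quoted in the analysis above.
goal_constraint_variance = 1e-5

def goal_log_likelihood(final_pos, target_pos, var=goal_constraint_variance):
    """Log-density of an isotropic 2D Gaussian goal constraint."""
    d2 = np.sum((np.asarray(final_pos) - np.asarray(target_pos)) ** 2)
    return -0.5 * d2 / var - np.log(2 * np.pi * var)

# Agent 1's target is (-10.0, -10.0): landing exactly on target is far more
# likely under the constraint than landing one unit away.
on_target = goal_log_likelihood([-10.0, -10.0], [-10.0, -10.0])
off_target = goal_log_likelihood([-10.0, -9.0], [-10.0, -10.0])
```

A small `var` makes the constraint sharply peaked at the target, which is how the model turns "reach the goal" into a soft probabilistic factor rather than a hard boundary condition.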